00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 144 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3645 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.124 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.125 The recommended git tool is: git 00:00:00.125 using credential 00000000-0000-0000-0000-000000000002 00:00:00.126 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.145 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.193 > git --version # 'git version 2.39.2' 00:00:00.193 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.499 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.510 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.521 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.521 > git config core.sparsecheckout # timeout=10 00:00:04.533 > git read-tree -mu HEAD # timeout=10 00:00:04.549 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 
00:00:04.572 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.572 > git rev-list --no-walk 6d4840695fb479ead742a39eb3a563a20cd15407 # timeout=10 00:00:04.670 [Pipeline] Start of Pipeline 00:00:04.684 [Pipeline] library 00:00:04.686 Loading library shm_lib@master 00:00:04.686 Library shm_lib@master is cached. Copying from home. 00:00:04.699 [Pipeline] node 00:00:04.725 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:04.727 [Pipeline] { 00:00:04.738 [Pipeline] catchError 00:00:04.739 [Pipeline] { 00:00:04.752 [Pipeline] wrap 00:00:04.762 [Pipeline] { 00:00:04.770 [Pipeline] stage 00:00:04.772 [Pipeline] { (Prologue) 00:00:04.790 [Pipeline] echo 00:00:04.792 Node: VM-host-WFP7 00:00:04.798 [Pipeline] cleanWs 00:00:04.807 [WS-CLEANUP] Deleting project workspace... 00:00:04.807 [WS-CLEANUP] Deferred wipeout is used... 00:00:04.813 [WS-CLEANUP] done 00:00:05.004 [Pipeline] setCustomBuildProperty 00:00:05.088 [Pipeline] httpRequest 00:00:05.442 [Pipeline] echo 00:00:05.444 Sorcerer 10.211.164.20 is alive 00:00:05.451 [Pipeline] retry 00:00:05.452 [Pipeline] { 00:00:05.463 [Pipeline] httpRequest 00:00:05.469 HttpMethod: GET 00:00:05.469 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.470 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.476 Response Code: HTTP/1.1 200 OK 00:00:05.477 Success: Status code 200 is in the accepted range: 200,404 00:00:05.477 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.272 [Pipeline] } 00:00:06.288 [Pipeline] // retry 00:00:06.295 [Pipeline] sh 00:00:06.579 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.593 [Pipeline] httpRequest 00:00:07.305 [Pipeline] echo 00:00:07.306 Sorcerer 10.211.164.20 is alive 00:00:07.315 [Pipeline] retry 00:00:07.316 
[Pipeline] { 00:00:07.327 [Pipeline] httpRequest 00:00:07.332 HttpMethod: GET 00:00:07.332 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:07.333 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:07.334 Response Code: HTTP/1.1 200 OK 00:00:07.335 Success: Status code 200 is in the accepted range: 200,404 00:00:07.335 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:29.837 [Pipeline] } 00:00:29.855 [Pipeline] // retry 00:00:29.862 [Pipeline] sh 00:00:30.147 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:32.767 [Pipeline] sh 00:00:33.052 + git -C spdk log --oneline -n5 00:00:33.052 b18e1bd62 version: v24.09.1-pre 00:00:33.052 19524ad45 version: v24.09 00:00:33.052 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:33.052 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:33.052 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:33.074 [Pipeline] withCredentials 00:00:33.086 > git --version # timeout=10 00:00:33.099 > git --version # 'git version 2.39.2' 00:00:33.117 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:33.119 [Pipeline] { 00:00:33.131 [Pipeline] retry 00:00:33.134 [Pipeline] { 00:00:33.152 [Pipeline] sh 00:00:33.437 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:33.711 [Pipeline] } 00:00:33.732 [Pipeline] // retry 00:00:33.738 [Pipeline] } 00:00:33.756 [Pipeline] // withCredentials 00:00:33.766 [Pipeline] httpRequest 00:00:34.146 [Pipeline] echo 00:00:34.155 Sorcerer 10.211.164.20 is alive 00:00:34.174 [Pipeline] retry 00:00:34.176 [Pipeline] { 00:00:34.183 [Pipeline] httpRequest 00:00:34.186 HttpMethod: GET 00:00:34.187 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:34.187 Sending request to url: 
http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:34.201 Response Code: HTTP/1.1 200 OK 00:00:34.201 Success: Status code 200 is in the accepted range: 200,404 00:00:34.201 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.423 [Pipeline] } 00:01:25.443 [Pipeline] // retry 00:01:25.453 [Pipeline] sh 00:01:25.737 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:27.129 [Pipeline] sh 00:01:27.415 + git -C dpdk log --oneline -n5 00:01:27.415 eeb0605f11 version: 23.11.0 00:01:27.415 238778122a doc: update release notes for 23.11 00:01:27.415 46aa6b3cfc doc: fix description of RSS features 00:01:27.415 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:27.415 7e421ae345 devtools: support skipping forbid rule check 00:01:27.437 [Pipeline] writeFile 00:01:27.454 [Pipeline] sh 00:01:27.737 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:27.750 [Pipeline] sh 00:01:28.034 + cat autorun-spdk.conf 00:01:28.034 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.034 SPDK_RUN_ASAN=1 00:01:28.034 SPDK_RUN_UBSAN=1 00:01:28.034 SPDK_TEST_RAID=1 00:01:28.034 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:28.034 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:28.034 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.041 RUN_NIGHTLY=1 00:01:28.043 [Pipeline] } 00:01:28.057 [Pipeline] // stage 00:01:28.073 [Pipeline] stage 00:01:28.075 [Pipeline] { (Run VM) 00:01:28.088 [Pipeline] sh 00:01:28.387 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:28.388 + echo 'Start stage prepare_nvme.sh' 00:01:28.388 Start stage prepare_nvme.sh 00:01:28.388 + [[ -n 5 ]] 00:01:28.388 + disk_prefix=ex5 00:01:28.388 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:28.388 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:28.388 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 
00:01:28.388 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.388 ++ SPDK_RUN_ASAN=1 00:01:28.388 ++ SPDK_RUN_UBSAN=1 00:01:28.388 ++ SPDK_TEST_RAID=1 00:01:28.388 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:28.388 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:28.388 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:28.388 ++ RUN_NIGHTLY=1 00:01:28.388 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:28.388 + nvme_files=() 00:01:28.388 + declare -A nvme_files 00:01:28.388 + backend_dir=/var/lib/libvirt/images/backends 00:01:28.388 + nvme_files['nvme.img']=5G 00:01:28.388 + nvme_files['nvme-cmb.img']=5G 00:01:28.388 + nvme_files['nvme-multi0.img']=4G 00:01:28.388 + nvme_files['nvme-multi1.img']=4G 00:01:28.388 + nvme_files['nvme-multi2.img']=4G 00:01:28.388 + nvme_files['nvme-openstack.img']=8G 00:01:28.388 + nvme_files['nvme-zns.img']=5G 00:01:28.388 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:28.388 + (( SPDK_TEST_FTL == 1 )) 00:01:28.388 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:28.388 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:28.388 + for nvme in "${!nvme_files[@]}" 00:01:28.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:28.388 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.388 + for nvme in "${!nvme_files[@]}" 00:01:28.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:28.388 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.388 + for nvme in "${!nvme_files[@]}" 00:01:28.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:28.388 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:28.388 + for nvme in "${!nvme_files[@]}" 00:01:28.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:28.388 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.388 + for nvme in "${!nvme_files[@]}" 00:01:28.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:28.388 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.388 + for nvme in "${!nvme_files[@]}" 00:01:28.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:28.388 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:28.388 + for nvme in "${!nvme_files[@]}" 00:01:28.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:28.663 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:28.663 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:28.663 + echo 'End stage prepare_nvme.sh' 00:01:28.663 End stage prepare_nvme.sh 00:01:28.675 [Pipeline] sh 00:01:28.959 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:28.959 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:28.959 00:01:28.959 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:28.959 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:28.959 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:28.959 HELP=0 00:01:28.959 DRY_RUN=0 00:01:28.959 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:28.959 NVME_DISKS_TYPE=nvme,nvme, 00:01:28.959 NVME_AUTO_CREATE=0 00:01:28.959 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:28.959 NVME_CMB=,, 00:01:28.959 NVME_PMR=,, 00:01:28.959 NVME_ZNS=,, 00:01:28.959 NVME_MS=,, 00:01:28.959 NVME_FDP=,, 00:01:28.959 SPDK_VAGRANT_DISTRO=fedora39 00:01:28.959 SPDK_VAGRANT_VMCPU=10 00:01:28.959 SPDK_VAGRANT_VMRAM=12288 00:01:28.959 SPDK_VAGRANT_PROVIDER=libvirt 00:01:28.959 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:28.959 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:28.959 SPDK_OPENSTACK_NETWORK=0 00:01:28.959 VAGRANT_PACKAGE_BOX=0 00:01:28.959 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:28.959 
FORCE_DISTRO=true 00:01:28.959 VAGRANT_BOX_VERSION= 00:01:28.959 EXTRA_VAGRANTFILES= 00:01:28.959 NIC_MODEL=virtio 00:01:28.959 00:01:28.959 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:28.959 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:31.499 Bringing machine 'default' up with 'libvirt' provider... 00:01:31.499 ==> default: Creating image (snapshot of base box volume). 00:01:31.760 ==> default: Creating domain with the following settings... 00:01:31.760 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732018896_d25ed8dc110143e8c523 00:01:31.760 ==> default: -- Domain type: kvm 00:01:31.760 ==> default: -- Cpus: 10 00:01:31.760 ==> default: -- Feature: acpi 00:01:31.760 ==> default: -- Feature: apic 00:01:31.760 ==> default: -- Feature: pae 00:01:31.760 ==> default: -- Memory: 12288M 00:01:31.760 ==> default: -- Memory Backing: hugepages: 00:01:31.760 ==> default: -- Management MAC: 00:01:31.760 ==> default: -- Loader: 00:01:31.760 ==> default: -- Nvram: 00:01:31.760 ==> default: -- Base box: spdk/fedora39 00:01:31.760 ==> default: -- Storage pool: default 00:01:31.760 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732018896_d25ed8dc110143e8c523.img (20G) 00:01:31.760 ==> default: -- Volume Cache: default 00:01:31.760 ==> default: -- Kernel: 00:01:31.760 ==> default: -- Initrd: 00:01:31.760 ==> default: -- Graphics Type: vnc 00:01:31.760 ==> default: -- Graphics Port: -1 00:01:31.760 ==> default: -- Graphics IP: 127.0.0.1 00:01:31.760 ==> default: -- Graphics Password: Not defined 00:01:31.760 ==> default: -- Video Type: cirrus 00:01:31.760 ==> default: -- Video VRAM: 9216 00:01:31.760 ==> default: -- Sound Type: 00:01:31.760 ==> default: -- Keymap: en-us 00:01:31.760 ==> default: -- TPM Path: 00:01:31.760 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:31.760 ==> default: -- Command line args: 00:01:31.760 
==> default: -> value=-device, 00:01:31.760 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:31.760 ==> default: -> value=-drive, 00:01:31.760 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:31.760 ==> default: -> value=-device, 00:01:31.760 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.760 ==> default: -> value=-device, 00:01:31.760 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:31.760 ==> default: -> value=-drive, 00:01:31.760 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:31.760 ==> default: -> value=-device, 00:01:31.760 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.760 ==> default: -> value=-drive, 00:01:31.760 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:31.760 ==> default: -> value=-device, 00:01:31.760 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.760 ==> default: -> value=-drive, 00:01:31.760 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:31.760 ==> default: -> value=-device, 00:01:31.760 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.760 ==> default: Creating shared folders metadata... 00:01:31.760 ==> default: Starting domain. 00:01:33.671 ==> default: Waiting for domain to get an IP address... 00:01:48.566 ==> default: Waiting for SSH to become available... 00:01:49.949 ==> default: Configuring and enabling network interfaces... 
00:01:56.550 default: SSH address: 192.168.121.206:22 00:01:56.550 default: SSH username: vagrant 00:01:56.550 default: SSH auth method: private key 00:01:59.094 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:07.220 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:12.503 ==> default: Mounting SSHFS shared folder... 00:02:15.041 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:15.041 ==> default: Checking Mount.. 00:02:16.449 ==> default: Folder Successfully Mounted! 00:02:16.449 ==> default: Running provisioner: file... 00:02:17.830 default: ~/.gitconfig => .gitconfig 00:02:18.091 00:02:18.091 SUCCESS! 00:02:18.091 00:02:18.091 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:18.091 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:18.091 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:18.091 00:02:18.100 [Pipeline] } 00:02:18.114 [Pipeline] // stage 00:02:18.123 [Pipeline] dir 00:02:18.123 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:18.125 [Pipeline] { 00:02:18.137 [Pipeline] catchError 00:02:18.139 [Pipeline] { 00:02:18.150 [Pipeline] sh 00:02:18.436 + vagrant ssh-config --host vagrant 00:02:18.436 + sed -ne /^Host/,$p 00:02:18.436 + tee ssh_conf 00:02:20.977 Host vagrant 00:02:20.977 HostName 192.168.121.206 00:02:20.977 User vagrant 00:02:20.977 Port 22 00:02:20.977 UserKnownHostsFile /dev/null 00:02:20.977 StrictHostKeyChecking no 00:02:20.977 PasswordAuthentication no 00:02:20.977 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:20.977 IdentitiesOnly yes 00:02:20.977 LogLevel FATAL 00:02:20.977 ForwardAgent yes 00:02:20.977 ForwardX11 yes 00:02:20.977 00:02:20.992 [Pipeline] withEnv 00:02:20.995 [Pipeline] { 00:02:21.009 [Pipeline] sh 00:02:21.362 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:21.362 source /etc/os-release 00:02:21.362 [[ -e /image.version ]] && img=$(< /image.version) 00:02:21.362 # Minimal, systemd-like check. 00:02:21.362 if [[ -e /.dockerenv ]]; then 00:02:21.362 # Clear garbage from the node's name: 00:02:21.362 # agt-er_autotest_547-896 -> autotest_547-896 00:02:21.362 # $HOSTNAME is the actual container id 00:02:21.362 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:21.362 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:21.362 # We can assume this is a mount from a host where container is running, 00:02:21.362 # so fetch its hostname to easily identify the target swarm worker. 
00:02:21.362 container="$(< /etc/hostname) ($agent)" 00:02:21.362 else 00:02:21.362 # Fallback 00:02:21.362 container=$agent 00:02:21.362 fi 00:02:21.362 fi 00:02:21.362 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:21.362 00:02:21.634 [Pipeline] } 00:02:21.650 [Pipeline] // withEnv 00:02:21.658 [Pipeline] setCustomBuildProperty 00:02:21.672 [Pipeline] stage 00:02:21.674 [Pipeline] { (Tests) 00:02:21.691 [Pipeline] sh 00:02:21.973 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:22.248 [Pipeline] sh 00:02:22.536 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:22.810 [Pipeline] timeout 00:02:22.810 Timeout set to expire in 1 hr 30 min 00:02:22.812 [Pipeline] { 00:02:22.827 [Pipeline] sh 00:02:23.110 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:23.679 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:23.692 [Pipeline] sh 00:02:23.976 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:24.246 [Pipeline] sh 00:02:24.525 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:24.804 [Pipeline] sh 00:02:25.087 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:25.348 ++ readlink -f spdk_repo 00:02:25.348 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:25.348 + [[ -n /home/vagrant/spdk_repo ]] 00:02:25.348 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:25.348 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:25.348 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:25.348 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:25.348 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:25.348 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:25.348 + cd /home/vagrant/spdk_repo 00:02:25.348 + source /etc/os-release 00:02:25.348 ++ NAME='Fedora Linux' 00:02:25.348 ++ VERSION='39 (Cloud Edition)' 00:02:25.348 ++ ID=fedora 00:02:25.348 ++ VERSION_ID=39 00:02:25.348 ++ VERSION_CODENAME= 00:02:25.348 ++ PLATFORM_ID=platform:f39 00:02:25.348 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:25.348 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.348 ++ LOGO=fedora-logo-icon 00:02:25.348 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:25.348 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.348 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:25.348 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.348 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.348 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.348 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:25.348 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.348 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:25.348 ++ SUPPORT_END=2024-11-12 00:02:25.348 ++ VARIANT='Cloud Edition' 00:02:25.348 ++ VARIANT_ID=cloud 00:02:25.348 + uname -a 00:02:25.348 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:25.348 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:25.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:25.917 Hugepages 00:02:25.917 node hugesize free / total 00:02:25.917 node0 1048576kB 0 / 0 00:02:25.917 node0 2048kB 0 / 0 00:02:25.917 00:02:25.917 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:25.917 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:25.917 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:25.917 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:25.917 + rm -f /tmp/spdk-ld-path 00:02:25.917 + source autorun-spdk.conf 00:02:25.917 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.917 ++ SPDK_RUN_ASAN=1 00:02:25.917 ++ SPDK_RUN_UBSAN=1 00:02:25.917 ++ SPDK_TEST_RAID=1 00:02:25.917 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:25.917 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:25.917 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.917 ++ RUN_NIGHTLY=1 00:02:25.917 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:25.917 + [[ -n '' ]] 00:02:25.917 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:25.918 + for M in /var/spdk/build-*-manifest.txt 00:02:25.918 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:25.918 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.918 + for M in /var/spdk/build-*-manifest.txt 00:02:25.918 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:25.918 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.918 + for M in /var/spdk/build-*-manifest.txt 00:02:25.918 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:25.918 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.178 ++ uname 00:02:26.178 + [[ Linux == \L\i\n\u\x ]] 00:02:26.178 + sudo dmesg -T 00:02:26.178 + sudo dmesg --clear 00:02:26.178 + dmesg_pid=6163 00:02:26.178 + [[ Fedora Linux == FreeBSD ]] 00:02:26.178 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.178 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.178 + sudo dmesg -Tw 00:02:26.178 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:26.178 + [[ -x /usr/src/fio-static/fio ]] 00:02:26.178 + export FIO_BIN=/usr/src/fio-static/fio 00:02:26.178 + FIO_BIN=/usr/src/fio-static/fio 00:02:26.178 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:26.178 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:26.178 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:26.178 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:26.178 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:26.178 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:26.178 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:26.178 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:26.178 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:26.178 Test configuration: 00:02:26.178 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.178 SPDK_RUN_ASAN=1 00:02:26.178 SPDK_RUN_UBSAN=1 00:02:26.178 SPDK_TEST_RAID=1 00:02:26.178 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:26.178 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:26.178 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.178 RUN_NIGHTLY=1 12:22:31 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:26.178 12:22:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:26.178 12:22:31 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:26.178 12:22:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:26.178 12:22:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.178 12:22:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.178 12:22:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.178 12:22:31 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.178 12:22:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.178 12:22:31 -- paths/export.sh@5 -- $ export PATH 00:02:26.178 12:22:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.178 12:22:31 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:26.178 12:22:31 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:26.178 12:22:31 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732018951.XXXXXX 00:02:26.178 12:22:31 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732018951.JO7tJ5 00:02:26.178 12:22:31 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:26.178 12:22:31 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:26.178 12:22:31 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:26.178 12:22:31 -- common/autobuild_common.sh@486 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:26.178 12:22:31 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:26.179 12:22:31 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:26.179 12:22:31 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:26.179 12:22:31 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:26.179 12:22:31 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.439 12:22:31 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:26.439 12:22:31 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:26.439 12:22:31 -- pm/common@17 -- $ local monitor 00:02:26.439 12:22:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.439 12:22:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.439 12:22:31 -- pm/common@21 -- $ date +%s 00:02:26.439 12:22:31 -- pm/common@25 -- $ sleep 1 00:02:26.439 12:22:31 -- pm/common@21 -- $ date +%s 00:02:26.439 12:22:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732018951 00:02:26.439 12:22:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732018951 00:02:26.439 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732018951_collect-cpu-load.pm.log 00:02:26.439 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732018951_collect-vmstat.pm.log 00:02:27.381 12:22:32 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:27.381 12:22:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:27.381 12:22:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:27.381 12:22:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:27.381 12:22:32 -- spdk/autobuild.sh@16 -- $ date -u 00:02:27.381 Tue Nov 19 12:22:32 PM UTC 2024 00:02:27.381 12:22:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:27.381 v24.09-1-gb18e1bd62 00:02:27.381 12:22:32 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:27.381 12:22:32 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:27.381 12:22:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:27.381 12:22:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.381 12:22:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.381 ************************************ 00:02:27.381 START TEST asan 00:02:27.381 ************************************ 00:02:27.381 using asan 00:02:27.381 12:22:32 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:27.381 00:02:27.381 real 0m0.000s 00:02:27.381 user 0m0.000s 00:02:27.381 sys 0m0.000s 00:02:27.381 12:22:32 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:27.381 12:22:32 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:27.381 ************************************ 00:02:27.381 END TEST asan 00:02:27.381 ************************************ 00:02:27.381 12:22:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:27.381 12:22:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:27.381 12:22:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:27.381 12:22:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.381 12:22:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.381 
************************************ 00:02:27.381 START TEST ubsan 00:02:27.381 ************************************ 00:02:27.381 using ubsan 00:02:27.381 12:22:32 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:27.381 00:02:27.381 real 0m0.000s 00:02:27.381 user 0m0.000s 00:02:27.381 sys 0m0.000s 00:02:27.381 12:22:32 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:27.381 12:22:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:27.381 ************************************ 00:02:27.381 END TEST ubsan 00:02:27.381 ************************************ 00:02:27.381 12:22:32 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:27.381 12:22:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:27.381 12:22:32 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:27.381 12:22:32 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:27.381 12:22:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.381 12:22:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.381 ************************************ 00:02:27.381 START TEST build_native_dpdk 00:02:27.381 ************************************ 00:02:27.381 12:22:32 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
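The trace above selects and probes the compiler before the DPDK build starts. A minimal standalone sketch of that pattern (variable names mirror the `autobuild_common.sh` trace; the unrecognized-compiler handling and the `command -v` guard are assumptions added so the sketch runs anywhere):

```shell
# Sketch of the compiler-selection logic traced above. Hypothetical
# standalone rendering; not the actual autobuild_common.sh code.
compiler=${CC:-gcc}                      # default to gcc, as in the trace
export CC=$compiler
if [[ $compiler != *clang* && $compiler != *gcc* ]]; then
    # the real script branches on clang vs gcc; here we just note it
    echo "note: unrecognized compiler '$compiler', proceeding anyway" >&2
fi
if command -v "$compiler" >/dev/null 2>&1; then
    compiler_version=$("$compiler" -dumpversion)   # e.g. "13" in the log
    compiler_version=${compiler_version%%.*}       # keep the major version
else
    compiler_version=unknown
fi
echo "building DPDK with CC=$compiler (major version $compiler_version)"
```

The major version is what later gates warning flags in the log (`[[ 13 -ge 10 ]]` enables `-Wno-stringop-overflow`).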
00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:27.381 12:22:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:27.642 eeb0605f11 version: 23.11.0 00:02:27.642 238778122a doc: update release notes for 23.11 00:02:27.642 46aa6b3cfc doc: fix description of RSS features 00:02:27.642 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:27.642 7e421ae345 devtools: support skipping forbid rule check 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.642 12:22:32 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:27.642 12:22:32 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:27.642 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:27.643 12:22:32 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:27.643 patching file config/rte_config.h 00:02:27.643 Hunk #1 succeeded at 60 (offset 1 line). 
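The `cmp_versions` trace above splits the two dotted versions on `.-:` and compares them component by component. A condensed sketch of that algorithm, under the assumption that purely numeric components suffice (the real `scripts/common.sh` also handles non-numeric parts):

```shell
# Minimal sketch of the cmp_versions "<" comparison exercised above.
cmp_versions_lt() {              # returns 0 (true) when $1 < $2
    local -a ver1 ver2
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # walk up to the longer of the two component lists, as in the trace
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                     # equal versions are not "less than"
}

cmp_versions_lt 23.11.0 21.11.0 && echo lt || echo not-lt   # not-lt
cmp_versions_lt 23.11.0 24.07.0 && echo lt || echo not-lt   # lt
```

This matches the two runs in the log: `lt 23.11.0 21.11.0` returns 1 (so the rte_config.h patch path is taken), and `lt 23.11.0 24.07.0` returns 0 (so the rte_pcapng.c patch is applied next).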
00:02:27.643 12:22:32 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:27.643 12:22:32 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:27.643 patching file lib/pcapng/rte_pcapng.c 00:02:27.643 12:22:32 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.643 12:22:32 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:27.643 12:22:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:27.643 12:22:32 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:27.643 12:22:32 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:27.643 12:22:32 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:27.643 12:22:32 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:27.643 12:22:32 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:34.219 The Meson build system 00:02:34.219 Version: 1.5.0 00:02:34.219 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:34.219 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:34.219 Build type: native build 00:02:34.219 Program cat found: YES (/usr/bin/cat) 00:02:34.219 Project name: DPDK 00:02:34.219 Project version: 23.11.0 00:02:34.219 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:34.219 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:34.219 Host machine cpu family: x86_64 00:02:34.219 Host machine cpu: x86_64 00:02:34.219 Message: ## Building in Developer Mode ## 00:02:34.219 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.219 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:34.219 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.219 Program python3 found: YES (/usr/bin/python3) 00:02:34.219 Program cat found: YES (/usr/bin/cat) 00:02:34.219 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
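The `printf %s,` step above is how the `DPDK_DRIVERS` array becomes the comma-joined `-Denable_drivers=` value in the `meson` invocation. A sketch of that construction (the command is echoed rather than executed; paths and flags are copied from the log):

```shell
# Join the driver list with printf, as the trace does; the trailing comma
# it leaves behind is visible in the log's -Denable_drivers value too.
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
enable_drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
dpdk_cflags='-fPIC -g -fcommon -Werror -Wno-stringop-overflow'

# Echo the configure command instead of running it (illustration only):
echo meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
    "-Dc_args=$dpdk_cflags" -Dmachine=native \
    "-Denable_drivers=$enable_drivers"
```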
00:02:34.219 Compiler for C supports arguments -march=native: YES 00:02:34.219 Checking for size of "void *" : 8 00:02:34.219 Checking for size of "void *" : 8 (cached) 00:02:34.219 Library m found: YES 00:02:34.219 Library numa found: YES 00:02:34.219 Has header "numaif.h" : YES 00:02:34.219 Library fdt found: NO 00:02:34.219 Library execinfo found: NO 00:02:34.219 Has header "execinfo.h" : YES 00:02:34.220 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:34.220 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.220 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.220 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.220 Run-time dependency openssl found: YES 3.1.1 00:02:34.220 Run-time dependency libpcap found: YES 1.10.4 00:02:34.220 Has header "pcap.h" with dependency libpcap: YES 00:02:34.220 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.220 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.220 Compiler for C supports arguments -Wformat: YES 00:02:34.220 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:34.220 Compiler for C supports arguments -Wformat-security: NO 00:02:34.220 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.220 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.220 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.220 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.220 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.220 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.220 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.220 Compiler for C supports arguments -Wundef: YES 00:02:34.220 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.220 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.220 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.220 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:34.220 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.220 Program objdump found: YES (/usr/bin/objdump) 00:02:34.220 Compiler for C supports arguments -mavx512f: YES 00:02:34.220 Checking if "AVX512 checking" compiles: YES 00:02:34.220 Fetching value of define "__SSE4_2__" : 1 00:02:34.220 Fetching value of define "__AES__" : 1 00:02:34.220 Fetching value of define "__AVX__" : 1 00:02:34.220 Fetching value of define "__AVX2__" : 1 00:02:34.220 Fetching value of define "__AVX512BW__" : 1 00:02:34.220 Fetching value of define "__AVX512CD__" : 1 00:02:34.220 Fetching value of define "__AVX512DQ__" : 1 00:02:34.220 Fetching value of define "__AVX512F__" : 1 00:02:34.220 Fetching value of define "__AVX512VL__" : 1 00:02:34.220 Fetching value of define "__PCLMUL__" : 1 00:02:34.220 Fetching value of define "__RDRND__" : 1 00:02:34.220 Fetching value of define "__RDSEED__" : 1 00:02:34.220 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.220 Fetching value of define "__znver1__" : (undefined) 00:02:34.220 Fetching value of define "__znver2__" : (undefined) 00:02:34.220 Fetching value of define "__znver3__" : (undefined) 00:02:34.220 Fetching value of define "__znver4__" : (undefined) 00:02:34.220 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.220 Message: lib/log: Defining dependency "log" 00:02:34.220 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.220 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.220 Checking for function "getentropy" : NO 00:02:34.220 Message: lib/eal: Defining dependency "eal" 00:02:34.220 Message: lib/ring: Defining dependency "ring" 00:02:34.220 Message: lib/rcu: Defining dependency "rcu" 00:02:34.220 Message: lib/mempool: Defining dependency "mempool" 00:02:34.220 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.220 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.220 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:34.220 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:34.220 Compiler for C supports arguments -mpclmul: YES 00:02:34.220 Compiler for C supports arguments -maes: YES 00:02:34.220 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.220 Compiler for C supports arguments -mavx512bw: YES 00:02:34.220 Compiler for C supports arguments -mavx512dq: YES 00:02:34.220 Compiler for C supports arguments -mavx512vl: YES 00:02:34.220 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.220 Compiler for C supports arguments -mavx2: YES 00:02:34.220 Compiler for C supports arguments -mavx: YES 00:02:34.220 Message: lib/net: Defining dependency "net" 00:02:34.220 Message: lib/meter: Defining dependency "meter" 00:02:34.220 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.220 Message: lib/pci: Defining dependency "pci" 00:02:34.220 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.220 Message: lib/metrics: Defining dependency "metrics" 00:02:34.220 Message: lib/hash: Defining dependency "hash" 00:02:34.220 Message: lib/timer: Defining dependency "timer" 00:02:34.220 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.220 Message: lib/acl: Defining dependency "acl" 00:02:34.220 Message: lib/bbdev: Defining dependency "bbdev" 00:02:34.220 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:34.220 Run-time dependency libelf found: YES 0.191 00:02:34.220 Message: lib/bpf: Defining dependency "bpf" 00:02:34.220 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:34.220 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.220 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.220 Message: lib/distributor: Defining dependency "distributor" 00:02:34.220 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.220 Message: lib/efd: Defining dependency "efd" 00:02:34.220 Message: lib/eventdev: Defining dependency "eventdev" 00:02:34.220 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:34.220 Message: lib/gpudev: Defining dependency "gpudev" 00:02:34.220 Message: lib/gro: Defining dependency "gro" 00:02:34.220 Message: lib/gso: Defining dependency "gso" 00:02:34.220 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:34.220 Message: lib/jobstats: Defining dependency "jobstats" 00:02:34.220 Message: lib/latencystats: Defining dependency "latencystats" 00:02:34.220 Message: lib/lpm: Defining dependency "lpm" 00:02:34.220 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:34.220 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:34.220 Message: lib/member: Defining dependency "member" 00:02:34.220 Message: lib/pcapng: Defining dependency "pcapng" 00:02:34.220 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.220 Message: lib/power: Defining dependency "power" 00:02:34.220 Message: lib/rawdev: Defining dependency "rawdev" 00:02:34.220 Message: lib/regexdev: Defining dependency "regexdev" 00:02:34.220 Message: lib/mldev: Defining dependency "mldev" 00:02:34.220 Message: lib/rib: Defining dependency "rib" 00:02:34.220 Message: lib/reorder: Defining dependency "reorder" 00:02:34.220 Message: lib/sched: Defining dependency "sched" 00:02:34.220 Message: lib/security: Defining dependency "security" 00:02:34.220 Message: lib/stack: Defining dependency "stack" 00:02:34.220 Has header 
"linux/userfaultfd.h" : YES 00:02:34.220 Has header "linux/vduse.h" : YES 00:02:34.220 Message: lib/vhost: Defining dependency "vhost" 00:02:34.220 Message: lib/ipsec: Defining dependency "ipsec" 00:02:34.220 Message: lib/pdcp: Defining dependency "pdcp" 00:02:34.220 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.220 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.220 Message: lib/fib: Defining dependency "fib" 00:02:34.220 Message: lib/port: Defining dependency "port" 00:02:34.220 Message: lib/pdump: Defining dependency "pdump" 00:02:34.220 Message: lib/table: Defining dependency "table" 00:02:34.220 Message: lib/pipeline: Defining dependency "pipeline" 00:02:34.220 Message: lib/graph: Defining dependency "graph" 00:02:34.220 Message: lib/node: Defining dependency "node" 00:02:34.220 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.220 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.220 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:35.161 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:35.161 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:35.161 Compiler for C supports arguments -Wno-unused-value: YES 00:02:35.161 Compiler for C supports arguments -Wno-format: YES 00:02:35.161 Compiler for C supports arguments -Wno-format-security: YES 00:02:35.161 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:35.161 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:35.161 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:35.161 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:35.161 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:35.161 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:35.161 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.161 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:35.161 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:35.161 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:35.161 Has header "sys/epoll.h" : YES 00:02:35.161 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:35.161 Configuring doxy-api-html.conf using configuration 00:02:35.161 Configuring doxy-api-man.conf using configuration 00:02:35.161 Program mandb found: YES (/usr/bin/mandb) 00:02:35.161 Program sphinx-build found: NO 00:02:35.161 Configuring rte_build_config.h using configuration 00:02:35.161 Message: 00:02:35.161 ================= 00:02:35.161 Applications Enabled 00:02:35.161 ================= 00:02:35.161 00:02:35.161 apps: 00:02:35.161 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:35.161 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:35.161 test-pmd, test-regex, test-sad, test-security-perf, 00:02:35.161 00:02:35.161 Message: 00:02:35.161 ================= 00:02:35.161 Libraries Enabled 00:02:35.161 ================= 00:02:35.161 00:02:35.161 libs: 00:02:35.161 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:35.161 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:35.161 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:35.161 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:35.161 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:35.161 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:35.161 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:35.161 00:02:35.161 00:02:35.161 Message: 00:02:35.161 =============== 00:02:35.161 Drivers Enabled 00:02:35.161 =============== 00:02:35.161 00:02:35.161 common: 00:02:35.161 00:02:35.161 bus: 00:02:35.161 pci, vdev, 00:02:35.161 mempool: 00:02:35.161 ring, 00:02:35.161 dma: 
00:02:35.161 00:02:35.161 net: 00:02:35.161 i40e, 00:02:35.161 raw: 00:02:35.161 00:02:35.161 crypto: 00:02:35.161 00:02:35.161 compress: 00:02:35.161 00:02:35.161 regex: 00:02:35.161 00:02:35.161 ml: 00:02:35.161 00:02:35.161 vdpa: 00:02:35.161 00:02:35.161 event: 00:02:35.161 00:02:35.161 baseband: 00:02:35.161 00:02:35.161 gpu: 00:02:35.161 00:02:35.161 00:02:35.161 Message: 00:02:35.161 ================= 00:02:35.161 Content Skipped 00:02:35.161 ================= 00:02:35.161 00:02:35.161 apps: 00:02:35.161 00:02:35.161 libs: 00:02:35.161 00:02:35.161 drivers: 00:02:35.161 common/cpt: not in enabled drivers build config 00:02:35.161 common/dpaax: not in enabled drivers build config 00:02:35.161 common/iavf: not in enabled drivers build config 00:02:35.161 common/idpf: not in enabled drivers build config 00:02:35.161 common/mvep: not in enabled drivers build config 00:02:35.161 common/octeontx: not in enabled drivers build config 00:02:35.161 bus/auxiliary: not in enabled drivers build config 00:02:35.161 bus/cdx: not in enabled drivers build config 00:02:35.161 bus/dpaa: not in enabled drivers build config 00:02:35.161 bus/fslmc: not in enabled drivers build config 00:02:35.161 bus/ifpga: not in enabled drivers build config 00:02:35.161 bus/platform: not in enabled drivers build config 00:02:35.161 bus/vmbus: not in enabled drivers build config 00:02:35.161 common/cnxk: not in enabled drivers build config 00:02:35.161 common/mlx5: not in enabled drivers build config 00:02:35.161 common/nfp: not in enabled drivers build config 00:02:35.161 common/qat: not in enabled drivers build config 00:02:35.161 common/sfc_efx: not in enabled drivers build config 00:02:35.161 mempool/bucket: not in enabled drivers build config 00:02:35.161 mempool/cnxk: not in enabled drivers build config 00:02:35.161 mempool/dpaa: not in enabled drivers build config 00:02:35.161 mempool/dpaa2: not in enabled drivers build config 00:02:35.161 mempool/octeontx: not in enabled drivers build 
config 00:02:35.161 mempool/stack: not in enabled drivers build config 00:02:35.161 dma/cnxk: not in enabled drivers build config 00:02:35.161 dma/dpaa: not in enabled drivers build config 00:02:35.161 dma/dpaa2: not in enabled drivers build config 00:02:35.161 dma/hisilicon: not in enabled drivers build config 00:02:35.161 dma/idxd: not in enabled drivers build config 00:02:35.161 dma/ioat: not in enabled drivers build config 00:02:35.161 dma/skeleton: not in enabled drivers build config 00:02:35.161 net/af_packet: not in enabled drivers build config 00:02:35.161 net/af_xdp: not in enabled drivers build config 00:02:35.161 net/ark: not in enabled drivers build config 00:02:35.161 net/atlantic: not in enabled drivers build config 00:02:35.161 net/avp: not in enabled drivers build config 00:02:35.161 net/axgbe: not in enabled drivers build config 00:02:35.162 net/bnx2x: not in enabled drivers build config 00:02:35.162 net/bnxt: not in enabled drivers build config 00:02:35.162 net/bonding: not in enabled drivers build config 00:02:35.162 net/cnxk: not in enabled drivers build config 00:02:35.162 net/cpfl: not in enabled drivers build config 00:02:35.162 net/cxgbe: not in enabled drivers build config 00:02:35.162 net/dpaa: not in enabled drivers build config 00:02:35.162 net/dpaa2: not in enabled drivers build config 00:02:35.162 net/e1000: not in enabled drivers build config 00:02:35.162 net/ena: not in enabled drivers build config 00:02:35.162 net/enetc: not in enabled drivers build config 00:02:35.162 net/enetfec: not in enabled drivers build config 00:02:35.162 net/enic: not in enabled drivers build config 00:02:35.162 net/failsafe: not in enabled drivers build config 00:02:35.162 net/fm10k: not in enabled drivers build config 00:02:35.162 net/gve: not in enabled drivers build config 00:02:35.162 net/hinic: not in enabled drivers build config 00:02:35.162 net/hns3: not in enabled drivers build config 00:02:35.162 net/iavf: not in enabled drivers build config 
00:02:35.162 net/ice: not in enabled drivers build config 00:02:35.162 net/idpf: not in enabled drivers build config 00:02:35.162 net/igc: not in enabled drivers build config 00:02:35.162 net/ionic: not in enabled drivers build config 00:02:35.162 net/ipn3ke: not in enabled drivers build config 00:02:35.162 net/ixgbe: not in enabled drivers build config 00:02:35.162 net/mana: not in enabled drivers build config 00:02:35.162 net/memif: not in enabled drivers build config 00:02:35.162 net/mlx4: not in enabled drivers build config 00:02:35.162 net/mlx5: not in enabled drivers build config 00:02:35.162 net/mvneta: not in enabled drivers build config 00:02:35.162 net/mvpp2: not in enabled drivers build config 00:02:35.162 net/netvsc: not in enabled drivers build config 00:02:35.162 net/nfb: not in enabled drivers build config 00:02:35.162 net/nfp: not in enabled drivers build config 00:02:35.162 net/ngbe: not in enabled drivers build config 00:02:35.162 net/null: not in enabled drivers build config 00:02:35.162 net/octeontx: not in enabled drivers build config 00:02:35.162 net/octeon_ep: not in enabled drivers build config 00:02:35.162 net/pcap: not in enabled drivers build config 00:02:35.162 net/pfe: not in enabled drivers build config 00:02:35.162 net/qede: not in enabled drivers build config 00:02:35.162 net/ring: not in enabled drivers build config 00:02:35.162 net/sfc: not in enabled drivers build config 00:02:35.162 net/softnic: not in enabled drivers build config 00:02:35.162 net/tap: not in enabled drivers build config 00:02:35.162 net/thunderx: not in enabled drivers build config 00:02:35.162 net/txgbe: not in enabled drivers build config 00:02:35.162 net/vdev_netvsc: not in enabled drivers build config 00:02:35.162 net/vhost: not in enabled drivers build config 00:02:35.162 net/virtio: not in enabled drivers build config 00:02:35.162 net/vmxnet3: not in enabled drivers build config 00:02:35.162 raw/cnxk_bphy: not in enabled drivers build config 00:02:35.162 
raw/cnxk_gpio: not in enabled drivers build config 00:02:35.162 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:35.162 raw/ifpga: not in enabled drivers build config 00:02:35.162 raw/ntb: not in enabled drivers build config 00:02:35.162 raw/skeleton: not in enabled drivers build config 00:02:35.162 crypto/armv8: not in enabled drivers build config 00:02:35.162 crypto/bcmfs: not in enabled drivers build config 00:02:35.162 crypto/caam_jr: not in enabled drivers build config 00:02:35.162 crypto/ccp: not in enabled drivers build config 00:02:35.162 crypto/cnxk: not in enabled drivers build config 00:02:35.162 crypto/dpaa_sec: not in enabled drivers build config 00:02:35.162 crypto/dpaa2_sec: not in enabled drivers build config 00:02:35.162 crypto/ipsec_mb: not in enabled drivers build config 00:02:35.162 crypto/mlx5: not in enabled drivers build config 00:02:35.162 crypto/mvsam: not in enabled drivers build config 00:02:35.162 crypto/nitrox: not in enabled drivers build config 00:02:35.162 crypto/null: not in enabled drivers build config 00:02:35.162 crypto/octeontx: not in enabled drivers build config 00:02:35.162 crypto/openssl: not in enabled drivers build config 00:02:35.162 crypto/scheduler: not in enabled drivers build config 00:02:35.162 crypto/uadk: not in enabled drivers build config 00:02:35.162 crypto/virtio: not in enabled drivers build config 00:02:35.162 compress/isal: not in enabled drivers build config 00:02:35.162 compress/mlx5: not in enabled drivers build config 00:02:35.162 compress/octeontx: not in enabled drivers build config 00:02:35.162 compress/zlib: not in enabled drivers build config 00:02:35.162 regex/mlx5: not in enabled drivers build config 00:02:35.162 regex/cn9k: not in enabled drivers build config 00:02:35.162 ml/cnxk: not in enabled drivers build config 00:02:35.162 vdpa/ifc: not in enabled drivers build config 00:02:35.162 vdpa/mlx5: not in enabled drivers build config 00:02:35.162 vdpa/nfp: not in enabled drivers build 
config 00:02:35.162 vdpa/sfc: not in enabled drivers build config 00:02:35.162 event/cnxk: not in enabled drivers build config 00:02:35.162 event/dlb2: not in enabled drivers build config 00:02:35.162 event/dpaa: not in enabled drivers build config 00:02:35.162 event/dpaa2: not in enabled drivers build config 00:02:35.162 event/dsw: not in enabled drivers build config 00:02:35.162 event/opdl: not in enabled drivers build config 00:02:35.162 event/skeleton: not in enabled drivers build config 00:02:35.162 event/sw: not in enabled drivers build config 00:02:35.162 event/octeontx: not in enabled drivers build config 00:02:35.162 baseband/acc: not in enabled drivers build config 00:02:35.162 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:35.162 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:35.162 baseband/la12xx: not in enabled drivers build config 00:02:35.162 baseband/null: not in enabled drivers build config 00:02:35.162 baseband/turbo_sw: not in enabled drivers build config 00:02:35.162 gpu/cuda: not in enabled drivers build config 00:02:35.162 00:02:35.162 00:02:35.162 Build targets in project: 217 00:02:35.162 00:02:35.162 DPDK 23.11.0 00:02:35.162 00:02:35.162 User defined options 00:02:35.162 libdir : lib 00:02:35.162 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:35.162 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:35.162 c_link_args : 00:02:35.162 enable_docs : false 00:02:35.162 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:35.162 enable_kmods : false 00:02:35.162 machine : native 00:02:35.162 tests : false 00:02:35.162 00:02:35.162 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:35.162 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:35.162 12:22:40 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:35.421 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:35.421 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:35.421 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.421 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:35.421 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:35.421 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:35.421 [6/707] Linking static target lib/librte_kvargs.a 00:02:35.421 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:35.681 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:35.681 [9/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:35.681 [10/707] Linking static target lib/librte_log.a 00:02:35.681 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.681 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.681 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.681 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.941 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.941 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.941 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.941 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.941 [19/707] Linking target lib/librte_log.so.24.0 00:02:36.199 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:36.199 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:36.199 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:36.199 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:36.199 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:36.199 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:36.459 [26/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:36.459 [27/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:36.459 [28/707] Linking static target lib/librte_telemetry.a 00:02:36.459 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:36.459 [30/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:36.459 [31/707] Linking target lib/librte_kvargs.so.24.0 00:02:36.459 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.459 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.719 [34/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:36.719 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.719 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:36.719 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.719 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.719 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.719 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.719 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.719 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:36.719 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:36.978 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.978 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:36.978 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.978 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:37.240 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:37.240 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:37.240 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:37.240 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:37.240 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.240 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.240 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.499 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:37.499 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:37.499 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.499 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.499 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:37.499 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.499 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.499 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.499 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.499 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.499 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.759 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.759 [67/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.759 [68/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.759 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:37.759 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.018 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.018 [72/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.018 [73/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:38.018 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.018 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.018 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.018 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.018 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.277 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.277 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.277 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.277 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.277 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.277 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.277 [85/707] Linking static target lib/librte_ring.a 00:02:38.537 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.537 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:38.537 [88/707] Generating lib/ring.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:38.537 [89/707] Linking static target lib/librte_eal.a 00:02:38.537 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.537 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.797 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.797 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.797 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.797 [95/707] Linking static target lib/librte_mempool.a 00:02:39.057 [96/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.057 [97/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.057 [98/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:39.057 [99/707] Linking static target lib/librte_rcu.a 00:02:39.057 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:39.057 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.057 [102/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.057 [103/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.057 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.318 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.318 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.318 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.318 [108/707] Linking static target lib/librte_net.a 00:02:39.318 [109/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:39.318 [110/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.318 [111/707] Linking static target lib/librte_mbuf.a 00:02:39.318 [112/707] Linking static target lib/librte_meter.a 00:02:39.576 [113/707] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.576 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.576 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.576 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.576 [117/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.835 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.835 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.093 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.093 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.351 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.352 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.352 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.611 [125/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.611 [126/707] Linking static target lib/librte_pci.a 00:02:40.611 [127/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.611 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.611 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.611 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.611 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.611 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.611 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.611 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:40.870 [135/707] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:40.870 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:40.870 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:40.870 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:40.870 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:40.870 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:40.870 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:40.870 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:40.870 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:40.870 [144/707] Linking static target lib/librte_cmdline.a 00:02:41.129 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.129 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:41.388 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:41.388 [148/707] Linking static target lib/librte_metrics.a 00:02:41.388 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:41.646 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.646 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.646 [152/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.646 [153/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:41.646 [154/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:41.646 [155/707] Linking static target lib/librte_timer.a 00:02:42.213 [156/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.213 [157/707] 
Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:42.213 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:42.213 [159/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:42.213 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:42.780 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:42.780 [162/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:42.780 [163/707] Linking static target lib/librte_bitratestats.a 00:02:43.038 [164/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:43.038 [165/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.038 [166/707] Linking static target lib/librte_bbdev.a 00:02:43.038 [167/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:43.038 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:43.296 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:43.554 [170/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.554 [171/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.554 [172/707] Linking static target lib/librte_hash.a 00:02:43.554 [173/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:43.554 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:43.812 [175/707] Linking static target lib/librte_ethdev.a 00:02:43.812 [176/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:43.812 [177/707] Linking static target lib/acl/libavx2_tmp.a 00:02:43.812 [178/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:43.812 [179/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:43.812 [180/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.812 [181/707] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:44.078 [182/707] Linking target lib/librte_eal.so.24.0 00:02:44.078 [183/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.078 [184/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:44.078 [185/707] Linking target lib/librte_ring.so.24.0 00:02:44.078 [186/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:44.078 [187/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:44.078 [188/707] Linking target lib/librte_meter.so.24.0 00:02:44.078 [189/707] Linking target lib/librte_pci.so.24.0 00:02:44.345 [190/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:44.345 [191/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:44.345 [192/707] Linking target lib/librte_rcu.so.24.0 00:02:44.345 [193/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:44.345 [194/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:44.345 [195/707] Linking static target lib/librte_cfgfile.a 00:02:44.345 [196/707] Linking target lib/librte_mempool.so.24.0 00:02:44.345 [197/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:44.345 [198/707] Linking target lib/librte_timer.so.24.0 00:02:44.346 [199/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:44.346 [200/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:44.346 [201/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:44.346 [202/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:44.346 [203/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:44.346 [204/707] Linking target lib/librte_mbuf.so.24.0 00:02:44.606 [205/707] Generating symbol 
file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:44.606 [206/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:44.606 [207/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.606 [208/707] Linking static target lib/librte_bpf.a 00:02:44.606 [209/707] Linking target lib/librte_bbdev.so.24.0 00:02:44.606 [210/707] Linking target lib/librte_net.so.24.0 00:02:44.606 [211/707] Linking target lib/librte_cfgfile.so.24.0 00:02:44.865 [212/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:44.865 [213/707] Linking target lib/librte_cmdline.so.24.0 00:02:44.865 [214/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.865 [215/707] Linking target lib/librte_hash.so.24.0 00:02:44.865 [216/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:44.865 [217/707] Linking static target lib/librte_compressdev.a 00:02:44.865 [218/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.865 [219/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:44.865 [220/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:44.865 [221/707] Linking static target lib/librte_acl.a 00:02:45.123 [222/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:45.123 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:45.123 [224/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:45.123 [225/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:45.123 [226/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.123 [227/707] Linking static target lib/librte_distributor.a 00:02:45.381 [228/707] Linking target 
lib/librte_acl.so.24.0 00:02:45.381 [229/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.381 [230/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:45.381 [231/707] Linking target lib/librte_compressdev.so.24.0 00:02:45.381 [232/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:45.381 [233/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.640 [234/707] Linking target lib/librte_distributor.so.24.0 00:02:45.640 [235/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:45.640 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:45.640 [237/707] Linking static target lib/librte_dmadev.a 00:02:45.898 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:45.898 [239/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.898 [240/707] Linking target lib/librte_dmadev.so.24.0 00:02:45.898 [241/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:46.157 [242/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:46.157 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:46.157 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:46.157 [245/707] Linking static target lib/librte_efd.a 00:02:46.415 [246/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.415 [247/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:46.415 [248/707] Linking target lib/librte_efd.so.24.0 00:02:46.415 [249/707] Linking static target lib/librte_cryptodev.a 00:02:46.415 [250/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 
00:02:46.674 [251/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:46.674 [252/707] Linking static target lib/librte_dispatcher.a 00:02:46.932 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:46.932 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:46.932 [255/707] Linking static target lib/librte_gpudev.a 00:02:46.932 [256/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.932 [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:46.932 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:46.932 [259/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:47.532 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:47.532 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:47.532 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:47.532 [263/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.532 [264/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.532 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:47.532 [266/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:47.532 [267/707] Linking target lib/librte_gpudev.so.24.0 00:02:47.532 [268/707] Linking static target lib/librte_gro.a 00:02:47.532 [269/707] Linking target lib/librte_cryptodev.so.24.0 00:02:47.791 [270/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:47.791 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:47.791 [272/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:47.791 [273/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:47.791 [274/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:47.791 [275/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.791 [276/707] Linking target lib/librte_ethdev.so.24.0 00:02:47.791 [277/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:47.791 [278/707] Linking static target lib/librte_eventdev.a 00:02:48.050 [279/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:48.050 [280/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:48.050 [281/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:48.050 [282/707] Linking target lib/librte_metrics.so.24.0 00:02:48.050 [283/707] Linking target lib/librte_gro.so.24.0 00:02:48.050 [284/707] Linking target lib/librte_bpf.so.24.0 00:02:48.050 [285/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:48.050 [286/707] Linking static target lib/librte_gso.a 00:02:48.050 [287/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:48.307 [288/707] Linking target lib/librte_bitratestats.so.24.0 00:02:48.307 [289/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:48.307 [290/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:48.307 [291/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.307 [292/707] Linking target lib/librte_gso.so.24.0 00:02:48.307 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:48.307 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:48.566 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:48.566 [296/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:48.566 [297/707] Compiling C 
object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:48.566 [298/707] Linking static target lib/librte_jobstats.a
00:02:48.566 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:48.824 [300/707] Linking static target lib/librte_ip_frag.a
00:02:48.824 [301/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:48.824 [302/707] Linking static target lib/librte_latencystats.a
00:02:48.824 [303/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:48.824 [304/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:48.824 [305/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.824 [306/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:48.824 [307/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:48.824 [308/707] Linking target lib/librte_jobstats.so.24.0
00:02:48.824 [309/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.083 [310/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.083 [311/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:49.083 [312/707] Linking target lib/librte_ip_frag.so.24.0
00:02:49.083 [313/707] Linking target lib/librte_latencystats.so.24.0
00:02:49.083 [314/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:49.083 [315/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:49.083 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:49.342 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:49.342 [318/707] Linking static target lib/librte_lpm.a
00:02:49.342 [319/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:49.342 [320/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:49.602 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:49.602 [322/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.602 [323/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:49.602 [324/707] Linking static target lib/librte_pcapng.a
00:02:49.602 [325/707] Linking target lib/librte_lpm.so.24.0
00:02:49.602 [326/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:49.602 [327/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:49.602 [328/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.602 [329/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:49.602 [330/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:49.862 [331/707] Linking target lib/librte_eventdev.so.24.0
00:02:49.862 [332/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.862 [333/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:49.862 [334/707] Linking target lib/librte_pcapng.so.24.0
00:02:49.862 [335/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:49.862 [336/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:49.862 [337/707] Linking target lib/librte_dispatcher.so.24.0
00:02:49.862 [338/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:50.122 [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:50.122 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:50.122 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:50.122 [342/707] Linking static target lib/librte_power.a
00:02:50.122 [343/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:50.122 [344/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:50.122 [345/707] Linking static target lib/librte_regexdev.a
00:02:50.381 [346/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:50.381 [347/707] Linking static target lib/librte_rawdev.a
00:02:50.381 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:50.381 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:50.381 [350/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:50.381 [351/707] Linking static target lib/librte_member.a
00:02:50.381 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:50.381 [353/707] Linking static target lib/librte_mldev.a
00:02:50.640 [354/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.640 [355/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.640 [356/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:50.640 [357/707] Linking target lib/librte_member.so.24.0
00:02:50.640 [358/707] Linking target lib/librte_rawdev.so.24.0
00:02:50.640 [359/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:50.640 [360/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.640 [361/707] Linking target lib/librte_power.so.24.0
00:02:50.640 [362/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:50.900 [363/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:50.900 [364/707] Linking static target lib/librte_reorder.a
00:02:50.900 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.900 [366/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:50.900 [367/707] Linking target lib/librte_regexdev.so.24.0
00:02:50.900 [368/707] Linking static target lib/librte_rib.a
00:02:50.900 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:50.900 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:51.159 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:51.159 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:51.159 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:51.159 [374/707] Linking static target lib/librte_stack.a
00:02:51.159 [375/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.159 [376/707] Linking target lib/librte_reorder.so.24.0
00:02:51.159 [377/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.419 [378/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.419 [379/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:51.419 [380/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:51.419 [381/707] Linking static target lib/librte_security.a
00:02:51.419 [382/707] Linking target lib/librte_rib.so.24.0
00:02:51.419 [383/707] Linking target lib/librte_stack.so.24.0
00:02:51.419 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:51.419 [385/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.419 [386/707] Linking target lib/librte_mldev.so.24.0
00:02:51.419 [387/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:51.419 [388/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:51.677 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.677 [390/707] Linking target lib/librte_security.so.24.0
00:02:51.677 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:51.677 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:51.936 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:51.936 [394/707] Linking static target lib/librte_sched.a
00:02:51.936 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:52.193 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:52.193 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.193 [398/707] Linking target lib/librte_sched.so.24.0
00:02:52.193 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:52.453 [400/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:52.453 [401/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:52.453 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:52.711 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:52.711 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:52.711 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:02:52.711 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:02:52.711 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:02:52.970 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:52.970 [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:02:53.229 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:53.230 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:53.230 [412/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:53.230 [413/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:53.230 [414/707] Linking static target lib/librte_ipsec.a
00:02:53.230 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:02:53.489 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.489 [417/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:53.748 [418/707] Linking target lib/librte_ipsec.so.24.0
00:02:53.748 [419/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:53.748 [420/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:02:53.748 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:54.006 [422/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:54.006 [423/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:54.006 [424/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:54.006 [425/707] Linking static target lib/librte_fib.a
00:02:54.265 [426/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:02:54.265 [427/707] Linking static target lib/librte_pdcp.a
00:02:54.265 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:54.265 [429/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:54.266 [430/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.266 [431/707] Linking target lib/librte_fib.so.24.0
00:02:54.266 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:54.524 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.524 [434/707] Linking target lib/librte_pdcp.so.24.0
00:02:54.782 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:54.783 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:54.783 [437/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:54.783 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:54.783 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:55.041 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:55.041 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:55.300 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:55.300 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:55.300 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:55.300 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:55.300 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:55.558 [447/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:55.558 [448/707] Linking static target lib/librte_port.a
00:02:55.558 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:55.558 [450/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:55.558 [451/707] Linking static target lib/librte_pdump.a
00:02:55.558 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:55.818 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:55.818 [454/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.818 [455/707] Linking target lib/librte_pdump.so.24.0
00:02:55.818 [456/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.818 [457/707] Linking target lib/librte_port.so.24.0
00:02:56.077 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:56.077 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:56.077 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:56.337 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:56.337 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:56.337 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:56.337 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:56.596 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:56.596 [466/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:56.596 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:56.596 [468/707] Linking static target lib/librte_table.a
00:02:56.854 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:56.854 [470/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:57.113 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:57.113 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.372 [473/707] Linking target lib/librte_table.so.24.0
00:02:57.372 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:57.372 [475/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:57.372 [476/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:02:57.372 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:57.372 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:57.631 [479/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:02:57.890 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:57.890 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:02:57.890 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:57.890 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:58.149 [484/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:02:58.149 [485/707] Linking static target lib/librte_graph.a
00:02:58.149 [486/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:58.149 [487/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:58.149 [488/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:58.149 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:58.408 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:02:58.667 [491/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.667 [492/707] Linking target lib/librte_graph.so.24.0
00:02:58.667 [493/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:02:58.667 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:58.926 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:58.926 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:58.926 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:02:58.926 [498/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:58.927 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:02:59.196 [500/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:02:59.196 [501/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:59.196 [502/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:02:59.196 [503/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:59.196 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:59.470 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:59.470 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:59.470 [507/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:59.470 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:59.730 [509/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:02:59.730 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:59.730 [511/707] Linking static target lib/librte_node.a
00:02:59.730 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:59.730 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.990 [514/707] Linking target lib/librte_node.so.24.0
00:02:59.990 [515/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:59.990 [516/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:59.990 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:59.990 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:59.990 [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:00.250 [520/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:00.250 [521/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:00.250 [522/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:00.250 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:00.250 [524/707] Linking static target drivers/librte_bus_pci.a
00:03:00.250 [525/707] Linking static target drivers/librte_bus_vdev.a
00:03:00.250 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:00.250 [527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:00.250 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:00.250 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:00.250 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.250 [531/707] Linking target drivers/librte_bus_vdev.so.24.0
00:03:00.510 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:00.510 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:00.510 [534/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:03:00.510 [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.510 [536/707] Linking target drivers/librte_bus_pci.so.24.0
00:03:00.510 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:00.510 [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.510 [539/707] Linking static target drivers/librte_mempool_ring.a
00:03:00.510 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.770 [541/707] Linking target drivers/librte_mempool_ring.so.24.0
00:03:00.770 [542/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:00.770 [543/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:03:00.770 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:01.029 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:01.289 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:01.289 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:01.859 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:02.118 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:02.118 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:03:02.118 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:03:02.118 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:02.118 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:02.118 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:02.377 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:02.377 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:02.637 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:02.637 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:02.637 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:03:02.897 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:03:03.155 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:03:03.155 [562/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:03.155 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:03:03.414 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:03:03.414 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:03:03.414 [566/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:03.674 [567/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:03:03.674 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:03.674 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:03.674 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:03.674 [571/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:03.674 [572/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:03.934 [573/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:03.934 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:03.934 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:04.192 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:04.192 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:04.192 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:04.455 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:04.455 [580/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:04.455 [581/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:04.715 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:04.715 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:04.715 [584/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:04.715 [585/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:04.974 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:04.974 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:04.974 [588/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:04.974 [589/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:04.974 [590/707] Linking static target drivers/librte_net_i40e.a
00:03:05.235 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:05.235 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:05.235 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:05.494 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:05.494 [595/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.494 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:05.494 [597/707] Linking target drivers/librte_net_i40e.so.24.0
00:03:05.494 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:05.494 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:05.754 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:06.015 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:06.015 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:06.015 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:06.015 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:06.015 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:06.277 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:06.277 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:06.277 [608/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:06.536 [609/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:03:06.536 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:06.536 [611/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:06.536 [612/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:06.795 [613/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:03:06.795 [614/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:06.795 [615/707] Linking static target lib/librte_vhost.a
00:03:06.795 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:06.795 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:07.054 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:07.625 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:07.625 [620/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:07.625 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:07.625 [622/707] Linking target lib/librte_vhost.so.24.0
00:03:07.884 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:07.884 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:07.884 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:07.884 [626/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:07.884 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:08.144 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:03:08.144 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:08.144 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:03:08.144 [631/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:08.144 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:03:08.404 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:08.404 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:03:08.404 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:03:08.404 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:03:08.404 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:03:08.663 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:03:08.663 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:03:08.663 [640/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:08.663 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:03:08.923 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:03:08.923 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:08.923 [644/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:08.923 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:09.182 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:09.182 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:09.182 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:09.182 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:09.442 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:09.442 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:09.702 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:09.702 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:03:09.702 [654/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:09.702 [655/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:09.962 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:09.962 [657/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:03:09.962 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:09.962 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:10.222 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:10.481 [661/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:10.481 [662/707] Linking static target lib/librte_pipeline.a
00:03:10.481 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:10.481 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:10.481 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:10.481 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:10.741 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:11.001 [668/707] Linking target app/dpdk-dumpcap
00:03:11.001 [669/707] Linking target app/dpdk-graph
00:03:11.001 [670/707] Linking target app/dpdk-pdump
00:03:11.001 [671/707] Linking target app/dpdk-proc-info
00:03:11.001 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:11.260 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:11.260 [674/707] Linking target app/dpdk-test-bbdev
00:03:11.260 [675/707] Linking target app/dpdk-test-acl
00:03:11.260 [676/707] Linking target app/dpdk-test-cmdline
00:03:11.521 [677/707] Linking target app/dpdk-test-compress-perf
00:03:11.521 [678/707] Linking target app/dpdk-test-dma-perf
00:03:11.521 [679/707] Linking target app/dpdk-test-crypto-perf
00:03:11.521 [680/707] Linking target app/dpdk-test-eventdev
00:03:11.781 [681/707] Linking target app/dpdk-test-gpudev
00:03:11.781 [682/707] Linking target app/dpdk-test-fib
00:03:11.781 [683/707] Linking target app/dpdk-test-flow-perf
00:03:11.781 [684/707] Linking target app/dpdk-test-pipeline
00:03:11.781 [685/707] Linking target app/dpdk-test-mldev
00:03:12.041 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:03:12.041 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:12.041 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:12.041 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:12.302 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:12.302 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:12.562 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:12.562 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:12.822 [694/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:12.822 [695/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:12.822 [696/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.822 [697/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:12.822 [698/707] Linking target lib/librte_pipeline.so.24.0
00:03:12.822 [699/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:13.081 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:13.081 [701/707] Linking target app/dpdk-test-sad
00:03:13.349 [702/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:13.349 [703/707] Linking target app/dpdk-test-regex
00:03:13.621 [704/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:13.621 [705/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:13.896 [706/707] Linking target app/dpdk-test-security-perf
00:03:13.896 [707/707] Linking target app/dpdk-testpmd
00:03:13.896 12:23:19 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:13.896 12:23:19 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:13.896 12:23:19 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:13.896 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:14.154 [0/1] Installing files.
00:03:14.417 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.417 
Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.417 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.418 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.419 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:14.419 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.419 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:14.420 Installing 
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.420 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.420 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.421 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:14.421 Installing 
/home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.421 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.422 
Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:14.422 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:14.422 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 
Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_bbdev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.422 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.423 Installing lib/librte_gpudev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.423 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.685 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.685 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.685 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.685 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:14.685 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.685 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:14.685 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.685 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:14.685 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.685 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:14.685 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.685 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.685 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.685 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.686 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.687 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.688 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.689 Installing
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.689 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.690 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.690 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.690 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.690 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.951 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.952 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.952 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.952 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.952 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:14.952 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:14.952 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24
00:03:14.952 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so
00:03:14.952 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24
00:03:14.952 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:03:14.952 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24
00:03:14.952 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:03:14.952 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24
00:03:14.952 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:03:14.952 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24
00:03:14.952 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:03:14.952 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24
00:03:14.952 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:03:14.952 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24
00:03:14.952 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so
00:03:14.952 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24
00:03:14.952 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so
00:03:14.952 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24
00:03:14.952 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so
00:03:14.952 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24
00:03:14.952 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so
00:03:14.952 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24
00:03:14.952 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so
00:03:14.952 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24
00:03:14.952 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so
00:03:14.952 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24
00:03:14.952 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so
00:03:14.952 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24
00:03:14.952 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so
00:03:14.952 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24
00:03:14.952 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so
00:03:14.952 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24
00:03:14.952 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so
00:03:14.952 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24
00:03:14.952 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so
00:03:14.952 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24
00:03:14.952 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so
00:03:14.952 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24
00:03:14.952 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so
00:03:14.952 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24
00:03:14.952 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so
00:03:14.952 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24
00:03:14.952 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so
00:03:14.952 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24
00:03:14.952 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so
00:03:14.952 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24
00:03:14.952 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so
00:03:14.952 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24
00:03:14.952 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so
00:03:14.952 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24
00:03:14.952 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so
00:03:14.952 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24
00:03:14.952 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so
00:03:14.952 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24
00:03:14.952 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so
00:03:14.952 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24
00:03:14.952 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so
00:03:14.952 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24
00:03:14.952 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so
00:03:14.952 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24
00:03:14.952 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so
00:03:14.952 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24
00:03:14.952 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so
00:03:14.952 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24
00:03:14.952 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so
00:03:14.952 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24
00:03:14.952 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so
00:03:14.952 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24
00:03:14.952 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so
00:03:14.952 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24
00:03:14.952 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so
00:03:14.952 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24
00:03:14.952 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so
00:03:14.952 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24
00:03:14.952 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so
00:03:14.952 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24
00:03:14.952 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so
00:03:14.952 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24
00:03:14.952 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so
00:03:14.952 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24
00:03:14.952 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so
00:03:14.952 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24
00:03:14.952 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so
00:03:14.952 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24
00:03:14.952 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so
00:03:14.952 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24
00:03:14.952 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so
00:03:14.952 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24
00:03:14.952 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so
00:03:14.952 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24
00:03:14.952 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so
00:03:14.952 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so'
00:03:14.952 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24'
00:03:14.952 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0'
00:03:14.952 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so'
00:03:14.952 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24'
00:03:14.952 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0'
00:03:14.952 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so'
00:03:14.952 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24'
00:03:14.952 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0'
00:03:14.952 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so'
00:03:14.953 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24'
00:03:14.953 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0'
00:03:14.953 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24
00:03:14.953 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so
00:03:14.953 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24
00:03:14.953 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so
00:03:14.953 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24
00:03:14.953 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so
00:03:14.953 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24
00:03:14.953 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so
00:03:14.953 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24
00:03:14.953 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so
00:03:14.953 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24
00:03:14.953 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so
00:03:14.953 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24
00:03:14.953 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:03:14.953 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24
00:03:14.953 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:03:14.953 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24
00:03:14.953 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:03:14.953 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24
00:03:14.953 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:03:14.953 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24
00:03:14.953 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:03:14.953 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24
00:03:14.953 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:03:14.953 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24
00:03:14.953 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:03:14.953 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24
00:03:14.953 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:03:14.953 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24
00:03:14.953 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:03:14.953 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:14.953 12:23:20 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:03:14.953 12:23:20 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:14.953 ************************************
00:03:14.953 END TEST build_native_dpdk
00:03:14.953 ************************************
00:03:14.953
00:03:14.953 real 0m47.459s
00:03:14.953 user 5m20.828s
00:03:14.953 sys 0m55.283s
00:03:14.953 12:23:20 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:14.953 12:23:20 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:14.953 12:23:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:14.953 12:23:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:14.953 12:23:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:14.953 12:23:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:14.953 12:23:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:14.953 12:23:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:14.953 12:23:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:14.953 12:23:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
00:03:15.212 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:15.212 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:15.212 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:03:15.473 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:15.732 Using 'verbs' RDMA provider
00:03:32.070 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:50.175 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:50.175 Creating mk/config.mk...done.
00:03:50.175 Creating mk/cc.flags.mk...done.
00:03:50.175 Type 'make' to build.
00:03:50.175 12:23:54 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:50.175 12:23:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:50.175 12:23:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:50.175 12:23:54 -- common/autotest_common.sh@10 -- $ set +x
00:03:50.175 ************************************
00:03:50.175 START TEST make
00:03:50.175 ************************************
00:03:50.175 12:23:54 make -- common/autotest_common.sh@1125 -- $ make -j10
00:03:50.175 make[1]: Nothing to be done for 'all'.
00:04:36.899 CC lib/ut/ut.o 00:04:36.899 CC lib/ut_mock/mock.o 00:04:36.899 CC lib/log/log.o 00:04:36.899 CC lib/log/log_deprecated.o 00:04:36.899 CC lib/log/log_flags.o 00:04:36.899 LIB libspdk_ut.a 00:04:36.899 LIB libspdk_ut_mock.a 00:04:36.899 LIB libspdk_log.a 00:04:36.899 SO libspdk_ut_mock.so.6.0 00:04:36.899 SO libspdk_ut.so.2.0 00:04:36.899 SO libspdk_log.so.7.0 00:04:36.899 SYMLINK libspdk_ut_mock.so 00:04:36.899 SYMLINK libspdk_ut.so 00:04:36.899 SYMLINK libspdk_log.so 00:04:36.899 CC lib/ioat/ioat.o 00:04:36.899 CXX lib/trace_parser/trace.o 00:04:36.899 CC lib/dma/dma.o 00:04:36.899 CC lib/util/crc16.o 00:04:36.899 CC lib/util/cpuset.o 00:04:36.899 CC lib/util/bit_array.o 00:04:36.899 CC lib/util/base64.o 00:04:36.899 CC lib/util/crc32.o 00:04:36.899 CC lib/util/crc32c.o 00:04:36.899 CC lib/vfio_user/host/vfio_user_pci.o 00:04:36.899 CC lib/vfio_user/host/vfio_user.o 00:04:36.899 CC lib/util/crc32_ieee.o 00:04:36.899 CC lib/util/crc64.o 00:04:36.899 CC lib/util/dif.o 00:04:36.899 LIB libspdk_dma.a 00:04:36.899 CC lib/util/fd.o 00:04:36.899 CC lib/util/fd_group.o 00:04:36.899 SO libspdk_dma.so.5.0 00:04:36.899 LIB libspdk_ioat.a 00:04:36.899 CC lib/util/file.o 00:04:36.899 CC lib/util/hexlify.o 00:04:36.899 SO libspdk_ioat.so.7.0 00:04:36.899 SYMLINK libspdk_dma.so 00:04:36.899 CC lib/util/iov.o 00:04:36.899 SYMLINK libspdk_ioat.so 00:04:36.899 CC lib/util/math.o 00:04:36.899 LIB libspdk_vfio_user.a 00:04:36.899 CC lib/util/net.o 00:04:36.899 CC lib/util/pipe.o 00:04:36.899 SO libspdk_vfio_user.so.5.0 00:04:36.899 CC lib/util/strerror_tls.o 00:04:36.899 CC lib/util/string.o 00:04:36.899 SYMLINK libspdk_vfio_user.so 00:04:36.899 CC lib/util/uuid.o 00:04:36.899 CC lib/util/xor.o 00:04:36.899 CC lib/util/zipf.o 00:04:36.899 CC lib/util/md5.o 00:04:36.899 LIB libspdk_util.a 00:04:36.899 SO libspdk_util.so.10.0 00:04:36.899 LIB libspdk_trace_parser.a 00:04:36.899 SYMLINK libspdk_util.so 00:04:36.899 SO libspdk_trace_parser.so.6.0 00:04:36.899 SYMLINK 
libspdk_trace_parser.so
00:04:36.899 CC lib/json/json_parse.o
00:04:36.899 CC lib/json/json_util.o
00:04:36.899 CC lib/json/json_write.o
00:04:36.899 CC lib/conf/conf.o
00:04:36.899 CC lib/env_dpdk/env.o
00:04:36.899 CC lib/env_dpdk/memory.o
00:04:36.899 CC lib/idxd/idxd.o
00:04:36.899 CC lib/rdma_provider/common.o
00:04:36.899 CC lib/rdma_utils/rdma_utils.o
00:04:36.899 CC lib/vmd/vmd.o
00:04:36.899 CC lib/rdma_provider/rdma_provider_verbs.o
00:04:36.899 LIB libspdk_conf.a
00:04:36.899 CC lib/vmd/led.o
00:04:36.899 SO libspdk_conf.so.6.0
00:04:36.899 LIB libspdk_rdma_utils.a
00:04:36.899 CC lib/env_dpdk/pci.o
00:04:36.899 LIB libspdk_json.a
00:04:36.899 SYMLINK libspdk_conf.so
00:04:36.899 CC lib/env_dpdk/init.o
00:04:36.899 SO libspdk_rdma_utils.so.1.0
00:04:36.899 SO libspdk_json.so.6.0
00:04:36.899 LIB libspdk_rdma_provider.a
00:04:36.899 SYMLINK libspdk_rdma_utils.so
00:04:36.899 CC lib/idxd/idxd_user.o
00:04:36.899 CC lib/idxd/idxd_kernel.o
00:04:36.899 SYMLINK libspdk_json.so
00:04:36.899 CC lib/env_dpdk/threads.o
00:04:36.899 SO libspdk_rdma_provider.so.6.0
00:04:36.899 SYMLINK libspdk_rdma_provider.so
00:04:36.899 CC lib/env_dpdk/pci_ioat.o
00:04:36.899 CC lib/env_dpdk/pci_virtio.o
00:04:36.899 CC lib/jsonrpc/jsonrpc_server.o
00:04:36.899 CC lib/env_dpdk/pci_vmd.o
00:04:36.899 CC lib/env_dpdk/pci_idxd.o
00:04:36.899 CC lib/env_dpdk/pci_event.o
00:04:36.899 LIB libspdk_idxd.a
00:04:36.899 CC lib/env_dpdk/sigbus_handler.o
00:04:36.899 CC lib/env_dpdk/pci_dpdk.o
00:04:36.899 CC lib/env_dpdk/pci_dpdk_2207.o
00:04:36.899 SO libspdk_idxd.so.12.1
00:04:36.899 LIB libspdk_vmd.a
00:04:36.899 CC lib/env_dpdk/pci_dpdk_2211.o
00:04:36.899 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:04:36.899 SYMLINK libspdk_idxd.so
00:04:36.899 SO libspdk_vmd.so.6.0
00:04:36.899 CC lib/jsonrpc/jsonrpc_client.o
00:04:36.899 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:04:36.899 SYMLINK libspdk_vmd.so
00:04:37.159 LIB libspdk_jsonrpc.a
00:04:37.159 SO libspdk_jsonrpc.so.6.0
00:04:37.418 SYMLINK
libspdk_jsonrpc.so
00:04:37.678 CC lib/rpc/rpc.o
00:04:37.937 LIB libspdk_env_dpdk.a
00:04:37.937 LIB libspdk_rpc.a
00:04:37.937 SO libspdk_rpc.so.6.0
00:04:38.197 SO libspdk_env_dpdk.so.15.0
00:04:38.197 SYMLINK libspdk_rpc.so
00:04:38.197 SYMLINK libspdk_env_dpdk.so
00:04:38.458 CC lib/trace/trace_flags.o
00:04:38.458 CC lib/trace/trace.o
00:04:38.458 CC lib/trace/trace_rpc.o
00:04:38.458 CC lib/notify/notify.o
00:04:38.458 CC lib/notify/notify_rpc.o
00:04:38.458 CC lib/keyring/keyring.o
00:04:38.458 CC lib/keyring/keyring_rpc.o
00:04:38.718 LIB libspdk_notify.a
00:04:38.718 SO libspdk_notify.so.6.0
00:04:38.718 LIB libspdk_trace.a
00:04:38.718 SYMLINK libspdk_notify.so
00:04:38.718 LIB libspdk_keyring.a
00:04:38.718 SO libspdk_trace.so.11.0
00:04:38.978 SO libspdk_keyring.so.2.0
00:04:38.978 SYMLINK libspdk_trace.so
00:04:38.979 SYMLINK libspdk_keyring.so
00:04:39.239 CC lib/thread/iobuf.o
00:04:39.239 CC lib/thread/thread.o
00:04:39.239 CC lib/sock/sock.o
00:04:39.239 CC lib/sock/sock_rpc.o
00:04:39.810 LIB libspdk_sock.a
00:04:39.810 SO libspdk_sock.so.10.0
00:04:40.070 SYMLINK libspdk_sock.so
00:04:40.331 CC lib/nvme/nvme_ctrlr_cmd.o
00:04:40.331 CC lib/nvme/nvme_ctrlr.o
00:04:40.331 CC lib/nvme/nvme_fabric.o
00:04:40.331 CC lib/nvme/nvme.o
00:04:40.331 CC lib/nvme/nvme_ns_cmd.o
00:04:40.331 CC lib/nvme/nvme_ns.o
00:04:40.331 CC lib/nvme/nvme_qpair.o
00:04:40.331 CC lib/nvme/nvme_pcie_common.o
00:04:40.331 CC lib/nvme/nvme_pcie.o
00:04:41.271 CC lib/nvme/nvme_quirks.o
00:04:41.271 CC lib/nvme/nvme_transport.o
00:04:41.271 LIB libspdk_thread.a
00:04:41.271 SO libspdk_thread.so.10.1
00:04:41.271 CC lib/nvme/nvme_discovery.o
00:04:41.271 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:04:41.271 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:04:41.271 SYMLINK libspdk_thread.so
00:04:41.271 CC lib/nvme/nvme_tcp.o
00:04:41.271 CC lib/nvme/nvme_opal.o
00:04:41.530 CC lib/nvme/nvme_io_msg.o
00:04:41.790 CC lib/accel/accel.o
00:04:41.790 CC lib/nvme/nvme_poll_group.o
00:04:41.790 CC
lib/blob/blobstore.o
00:04:42.051 CC lib/blob/request.o
00:04:42.051 CC lib/blob/zeroes.o
00:04:42.051 CC lib/blob/blob_bs_dev.o
00:04:42.051 CC lib/init/json_config.o
00:04:42.051 CC lib/nvme/nvme_zns.o
00:04:42.051 CC lib/accel/accel_rpc.o
00:04:42.311 CC lib/init/subsystem.o
00:04:42.311 CC lib/init/subsystem_rpc.o
00:04:42.311 CC lib/init/rpc.o
00:04:42.311 CC lib/nvme/nvme_stubs.o
00:04:42.311 CC lib/accel/accel_sw.o
00:04:42.572 CC lib/nvme/nvme_auth.o
00:04:42.572 LIB libspdk_init.a
00:04:42.572 SO libspdk_init.so.6.0
00:04:42.572 CC lib/virtio/virtio.o
00:04:42.572 SYMLINK libspdk_init.so
00:04:42.572 CC lib/nvme/nvme_cuse.o
00:04:42.572 CC lib/nvme/nvme_rdma.o
00:04:42.838 CC lib/virtio/virtio_vhost_user.o
00:04:42.838 CC lib/virtio/virtio_vfio_user.o
00:04:42.838 CC lib/virtio/virtio_pci.o
00:04:42.838 LIB libspdk_accel.a
00:04:43.107 SO libspdk_accel.so.16.0
00:04:43.107 SYMLINK libspdk_accel.so
00:04:43.107 CC lib/fsdev/fsdev.o
00:04:43.107 CC lib/fsdev/fsdev_io.o
00:04:43.107 CC lib/fsdev/fsdev_rpc.o
00:04:43.368 LIB libspdk_virtio.a
00:04:43.368 CC lib/event/app.o
00:04:43.368 CC lib/bdev/bdev.o
00:04:43.368 SO libspdk_virtio.so.7.0
00:04:43.368 CC lib/bdev/bdev_rpc.o
00:04:43.368 SYMLINK libspdk_virtio.so
00:04:43.368 CC lib/bdev/bdev_zone.o
00:04:43.368 CC lib/bdev/part.o
00:04:43.368 CC lib/event/reactor.o
00:04:43.629 CC lib/event/log_rpc.o
00:04:43.629 CC lib/bdev/scsi_nvme.o
00:04:43.629 CC lib/event/app_rpc.o
00:04:43.629 CC lib/event/scheduler_static.o
00:04:43.889 LIB libspdk_fsdev.a
00:04:43.889 SO libspdk_fsdev.so.1.0
00:04:43.889 LIB libspdk_event.a
00:04:44.153 SYMLINK libspdk_fsdev.so
00:04:44.153 SO libspdk_event.so.14.0
00:04:44.153 SYMLINK libspdk_event.so
00:04:44.153 LIB libspdk_nvme.a
00:04:44.414 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:04:44.414 SO libspdk_nvme.so.14.0
00:04:44.674 SYMLINK libspdk_nvme.so
00:04:45.245 LIB libspdk_fuse_dispatcher.a
00:04:45.245 SO libspdk_fuse_dispatcher.so.1.0
00:04:45.245 SYMLINK
libspdk_fuse_dispatcher.so
00:04:45.814 LIB libspdk_blob.a
00:04:45.814 SO libspdk_blob.so.11.0
00:04:46.074 SYMLINK libspdk_blob.so
00:04:46.334 CC lib/blobfs/blobfs.o
00:04:46.334 CC lib/blobfs/tree.o
00:04:46.334 CC lib/lvol/lvol.o
00:04:46.334 LIB libspdk_bdev.a
00:04:46.334 SO libspdk_bdev.so.16.0
00:04:46.594 SYMLINK libspdk_bdev.so
00:04:46.854 CC lib/nvmf/ctrlr.o
00:04:46.854 CC lib/nvmf/ctrlr_bdev.o
00:04:46.854 CC lib/nvmf/subsystem.o
00:04:46.854 CC lib/nvmf/ctrlr_discovery.o
00:04:46.854 CC lib/ublk/ublk.o
00:04:46.854 CC lib/scsi/dev.o
00:04:46.854 CC lib/ftl/ftl_core.o
00:04:46.854 CC lib/nbd/nbd.o
00:04:47.114 CC lib/scsi/lun.o
00:04:47.373 CC lib/ftl/ftl_init.o
00:04:47.373 LIB libspdk_blobfs.a
00:04:47.373 SO libspdk_blobfs.so.10.0
00:04:47.373 CC lib/nbd/nbd_rpc.o
00:04:47.373 CC lib/scsi/port.o
00:04:47.373 SYMLINK libspdk_blobfs.so
00:04:47.373 CC lib/scsi/scsi.o
00:04:47.373 LIB libspdk_lvol.a
00:04:47.373 CC lib/nvmf/nvmf.o
00:04:47.373 SO libspdk_lvol.so.10.0
00:04:47.633 CC lib/ftl/ftl_layout.o
00:04:47.633 SYMLINK libspdk_lvol.so
00:04:47.633 CC lib/scsi/scsi_bdev.o
00:04:47.633 CC lib/ublk/ublk_rpc.o
00:04:47.633 LIB libspdk_nbd.a
00:04:47.633 CC lib/scsi/scsi_pr.o
00:04:47.633 SO libspdk_nbd.so.7.0
00:04:47.633 CC lib/nvmf/nvmf_rpc.o
00:04:47.633 SYMLINK libspdk_nbd.so
00:04:47.633 CC lib/scsi/scsi_rpc.o
00:04:47.633 CC lib/nvmf/transport.o
00:04:47.633 LIB libspdk_ublk.a
00:04:47.892 SO libspdk_ublk.so.3.0
00:04:47.892 CC lib/nvmf/tcp.o
00:04:47.892 SYMLINK libspdk_ublk.so
00:04:47.892 CC lib/ftl/ftl_debug.o
00:04:47.892 CC lib/scsi/task.o
00:04:47.892 CC lib/nvmf/stubs.o
00:04:48.152 CC lib/ftl/ftl_io.o
00:04:48.152 CC lib/nvmf/mdns_server.o
00:04:48.152 LIB libspdk_scsi.a
00:04:48.152 SO libspdk_scsi.so.9.0
00:04:48.152 SYMLINK libspdk_scsi.so
00:04:48.152 CC lib/ftl/ftl_sb.o
00:04:48.412 CC lib/nvmf/rdma.o
00:04:48.412 CC lib/ftl/ftl_l2p.o
00:04:48.412 CC lib/nvmf/auth.o
00:04:48.412 CC lib/ftl/ftl_l2p_flat.o
00:04:48.672 CC
lib/ftl/ftl_nv_cache.o
00:04:48.672 CC lib/ftl/ftl_band.o
00:04:48.672 CC lib/ftl/ftl_band_ops.o
00:04:48.672 CC lib/iscsi/conn.o
00:04:48.672 CC lib/ftl/ftl_writer.o
00:04:48.672 CC lib/vhost/vhost.o
00:04:48.931 CC lib/ftl/ftl_rq.o
00:04:48.931 CC lib/ftl/ftl_reloc.o
00:04:49.191 CC lib/vhost/vhost_rpc.o
00:04:49.191 CC lib/vhost/vhost_scsi.o
00:04:49.191 CC lib/ftl/ftl_l2p_cache.o
00:04:49.451 CC lib/iscsi/init_grp.o
00:04:49.451 CC lib/ftl/ftl_p2l.o
00:04:49.451 CC lib/vhost/vhost_blk.o
00:04:49.451 CC lib/vhost/rte_vhost_user.o
00:04:49.712 CC lib/iscsi/iscsi.o
00:04:49.712 CC lib/ftl/ftl_p2l_log.o
00:04:49.712 CC lib/ftl/mngt/ftl_mngt.o
00:04:49.712 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:04:49.712 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:04:49.972 CC lib/iscsi/param.o
00:04:49.972 CC lib/ftl/mngt/ftl_mngt_startup.o
00:04:49.972 CC lib/ftl/mngt/ftl_mngt_md.o
00:04:49.972 CC lib/iscsi/portal_grp.o
00:04:49.972 CC lib/iscsi/tgt_node.o
00:04:49.972 CC lib/iscsi/iscsi_subsystem.o
00:04:50.232 CC lib/iscsi/iscsi_rpc.o
00:04:50.232 CC lib/iscsi/task.o
00:04:50.232 CC lib/ftl/mngt/ftl_mngt_misc.o
00:04:50.492 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:04:50.492 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:04:50.492 CC lib/ftl/mngt/ftl_mngt_band.o
00:04:50.492 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:04:50.492 LIB libspdk_vhost.a
00:04:50.492 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:04:50.492 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:04:50.492 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:04:50.492 CC lib/ftl/utils/ftl_conf.o
00:04:50.751 SO libspdk_vhost.so.8.0
00:04:50.751 CC lib/ftl/utils/ftl_md.o
00:04:50.751 SYMLINK libspdk_vhost.so
00:04:50.751 CC lib/ftl/utils/ftl_mempool.o
00:04:50.751 CC lib/ftl/utils/ftl_bitmap.o
00:04:50.751 CC lib/ftl/utils/ftl_property.o
00:04:50.751 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:04:50.751 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:04:50.751 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:04:51.012 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:04:51.012 CC
lib/ftl/upgrade/ftl_band_upgrade.o
00:04:51.012 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:04:51.012 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:04:51.012 CC lib/ftl/upgrade/ftl_sb_v3.o
00:04:51.012 LIB libspdk_nvmf.a
00:04:51.012 CC lib/ftl/upgrade/ftl_sb_v5.o
00:04:51.012 CC lib/ftl/nvc/ftl_nvc_dev.o
00:04:51.272 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:04:51.272 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:04:51.272 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:04:51.272 SO libspdk_nvmf.so.19.0
00:04:51.272 CC lib/ftl/base/ftl_base_dev.o
00:04:51.272 CC lib/ftl/base/ftl_base_bdev.o
00:04:51.272 CC lib/ftl/ftl_trace.o
00:04:51.272 LIB libspdk_iscsi.a
00:04:51.272 SO libspdk_iscsi.so.8.0
00:04:51.532 SYMLINK libspdk_nvmf.so
00:04:51.532 LIB libspdk_ftl.a
00:04:51.532 SYMLINK libspdk_iscsi.so
00:04:51.792 SO libspdk_ftl.so.9.0
00:04:52.053 SYMLINK libspdk_ftl.so
00:04:52.623 CC module/env_dpdk/env_dpdk_rpc.o
00:04:52.623 CC module/blob/bdev/blob_bdev.o
00:04:52.623 CC module/keyring/linux/keyring.o
00:04:52.623 CC module/fsdev/aio/fsdev_aio.o
00:04:52.623 CC module/accel/dsa/accel_dsa.o
00:04:52.623 CC module/keyring/file/keyring.o
00:04:52.623 CC module/accel/error/accel_error.o
00:04:52.623 CC module/accel/ioat/accel_ioat.o
00:04:52.623 CC module/sock/posix/posix.o
00:04:52.623 CC module/scheduler/dynamic/scheduler_dynamic.o
00:04:52.623 LIB libspdk_env_dpdk_rpc.a
00:04:52.623 SO libspdk_env_dpdk_rpc.so.6.0
00:04:52.623 CC module/keyring/linux/keyring_rpc.o
00:04:52.623 CC module/keyring/file/keyring_rpc.o
00:04:52.623 SYMLINK libspdk_env_dpdk_rpc.so
00:04:52.623 CC module/accel/error/accel_error_rpc.o
00:04:52.623 CC module/accel/dsa/accel_dsa_rpc.o
00:04:52.623 CC module/accel/ioat/accel_ioat_rpc.o
00:04:52.623 LIB libspdk_scheduler_dynamic.a
00:04:52.883 SO libspdk_scheduler_dynamic.so.4.0
00:04:52.883 LIB libspdk_keyring_file.a
00:04:52.883 LIB libspdk_keyring_linux.a
00:04:52.883 LIB libspdk_accel_error.a
00:04:52.883 SYMLINK libspdk_scheduler_dynamic.so
00:04:52.883 LIB
libspdk_blob_bdev.a
00:04:52.883 SO libspdk_keyring_file.so.2.0
00:04:52.883 LIB libspdk_accel_dsa.a
00:04:52.883 SO libspdk_keyring_linux.so.1.0
00:04:52.883 LIB libspdk_accel_ioat.a
00:04:52.883 SO libspdk_blob_bdev.so.11.0
00:04:52.883 SO libspdk_accel_error.so.2.0
00:04:52.883 SO libspdk_accel_dsa.so.5.0
00:04:52.883 SO libspdk_accel_ioat.so.6.0
00:04:52.883 SYMLINK libspdk_keyring_file.so
00:04:52.883 SYMLINK libspdk_keyring_linux.so
00:04:52.883 SYMLINK libspdk_accel_error.so
00:04:52.883 SYMLINK libspdk_accel_ioat.so
00:04:52.883 CC module/fsdev/aio/fsdev_aio_rpc.o
00:04:52.883 CC module/fsdev/aio/linux_aio_mgr.o
00:04:52.883 SYMLINK libspdk_accel_dsa.so
00:04:52.883 SYMLINK libspdk_blob_bdev.so
00:04:53.144 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:04:53.144 CC module/scheduler/gscheduler/gscheduler.o
00:04:53.144 CC module/accel/iaa/accel_iaa.o
00:04:53.144 CC module/accel/iaa/accel_iaa_rpc.o
00:04:53.144 LIB libspdk_scheduler_dpdk_governor.a
00:04:53.144 LIB libspdk_scheduler_gscheduler.a
00:04:53.144 SO libspdk_scheduler_dpdk_governor.so.4.0
00:04:53.144 CC module/bdev/delay/vbdev_delay.o
00:04:53.144 SO libspdk_scheduler_gscheduler.so.4.0
00:04:53.144 CC module/blobfs/bdev/blobfs_bdev.o
00:04:53.144 CC module/bdev/error/vbdev_error.o
00:04:53.403 LIB libspdk_fsdev_aio.a
00:04:53.403 SYMLINK libspdk_scheduler_dpdk_governor.so
00:04:53.403 CC module/bdev/gpt/gpt.o
00:04:53.403 SYMLINK libspdk_scheduler_gscheduler.so
00:04:53.403 CC module/bdev/delay/vbdev_delay_rpc.o
00:04:53.403 CC module/bdev/gpt/vbdev_gpt.o
00:04:53.403 SO libspdk_fsdev_aio.so.1.0
00:04:53.403 LIB libspdk_accel_iaa.a
00:04:53.403 SO libspdk_accel_iaa.so.3.0
00:04:53.404 SYMLINK libspdk_fsdev_aio.so
00:04:53.404 LIB libspdk_sock_posix.a
00:04:53.404 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:04:53.404 SO libspdk_sock_posix.so.6.0
00:04:53.404 SYMLINK libspdk_accel_iaa.so
00:04:53.404 CC module/bdev/lvol/vbdev_lvol.o
00:04:53.664 CC module/bdev/error/vbdev_error_rpc.o
00:04:53.664 SYMLINK libspdk_sock_posix.so
00:04:53.664 CC module/bdev/malloc/bdev_malloc.o
00:04:53.664 LIB libspdk_bdev_gpt.a
00:04:53.664 LIB libspdk_blobfs_bdev.a
00:04:53.664 CC module/bdev/null/bdev_null.o
00:04:53.664 SO libspdk_bdev_gpt.so.6.0
00:04:53.664 SO libspdk_blobfs_bdev.so.6.0
00:04:53.664 CC module/bdev/nvme/bdev_nvme.o
00:04:53.664 LIB libspdk_bdev_delay.a
00:04:53.664 CC module/bdev/passthru/vbdev_passthru.o
00:04:53.664 LIB libspdk_bdev_error.a
00:04:53.664 SO libspdk_bdev_delay.so.6.0
00:04:53.664 SYMLINK libspdk_bdev_gpt.so
00:04:53.664 SO libspdk_bdev_error.so.6.0
00:04:53.664 CC module/bdev/raid/bdev_raid.o
00:04:53.664 SYMLINK libspdk_blobfs_bdev.so
00:04:53.664 CC module/bdev/nvme/bdev_nvme_rpc.o
00:04:53.664 CC module/bdev/nvme/nvme_rpc.o
00:04:53.664 SYMLINK libspdk_bdev_delay.so
00:04:53.664 CC module/bdev/nvme/bdev_mdns_client.o
00:04:53.664 SYMLINK libspdk_bdev_error.so
00:04:53.924 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:04:53.924 CC module/bdev/null/bdev_null_rpc.o
00:04:53.924 CC module/bdev/nvme/vbdev_opal.o
00:04:53.924 CC module/bdev/malloc/bdev_malloc_rpc.o
00:04:53.924 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:04:54.182 CC module/bdev/nvme/vbdev_opal_rpc.o
00:04:54.182 CC module/bdev/split/vbdev_split.o
00:04:54.182 LIB libspdk_bdev_null.a
00:04:54.182 SO libspdk_bdev_null.so.6.0
00:04:54.182 LIB libspdk_bdev_passthru.a
00:04:54.182 LIB libspdk_bdev_malloc.a
00:04:54.182 LIB libspdk_bdev_lvol.a
00:04:54.182 SO libspdk_bdev_passthru.so.6.0
00:04:54.182 SO libspdk_bdev_malloc.so.6.0
00:04:54.182 SYMLINK libspdk_bdev_null.so
00:04:54.182 CC module/bdev/split/vbdev_split_rpc.o
00:04:54.182 SO libspdk_bdev_lvol.so.6.0
00:04:54.182 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:04:54.182 SYMLINK libspdk_bdev_malloc.so
00:04:54.441 SYMLINK libspdk_bdev_passthru.so
00:04:54.441 SYMLINK libspdk_bdev_lvol.so
00:04:54.441 CC module/bdev/raid/bdev_raid_rpc.o
00:04:54.441 CC module/bdev/raid/bdev_raid_sb.o
00:04:54.441 CC
module/bdev/raid/raid0.o
00:04:54.441 LIB libspdk_bdev_split.a
00:04:54.441 SO libspdk_bdev_split.so.6.0
00:04:54.441 CC module/bdev/aio/bdev_aio.o
00:04:54.441 CC module/bdev/zone_block/vbdev_zone_block.o
00:04:54.441 CC module/bdev/ftl/bdev_ftl.o
00:04:54.441 SYMLINK libspdk_bdev_split.so
00:04:54.441 CC module/bdev/ftl/bdev_ftl_rpc.o
00:04:54.701 CC module/bdev/iscsi/bdev_iscsi.o
00:04:54.701 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:04:54.701 CC module/bdev/raid/raid1.o
00:04:54.701 CC module/bdev/raid/concat.o
00:04:54.701 CC module/bdev/raid/raid5f.o
00:04:54.701 LIB libspdk_bdev_ftl.a
00:04:54.961 SO libspdk_bdev_ftl.so.6.0
00:04:54.961 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:04:54.961 CC module/bdev/aio/bdev_aio_rpc.o
00:04:54.961 SYMLINK libspdk_bdev_ftl.so
00:04:54.961 CC module/bdev/virtio/bdev_virtio_scsi.o
00:04:54.961 CC module/bdev/virtio/bdev_virtio_blk.o
00:04:54.961 CC module/bdev/virtio/bdev_virtio_rpc.o
00:04:54.961 LIB libspdk_bdev_iscsi.a
00:04:54.961 SO libspdk_bdev_iscsi.so.6.0
00:04:54.961 LIB libspdk_bdev_zone_block.a
00:04:54.961 LIB libspdk_bdev_aio.a
00:04:54.961 SYMLINK libspdk_bdev_iscsi.so
00:04:54.961 SO libspdk_bdev_zone_block.so.6.0
00:04:54.961 SO libspdk_bdev_aio.so.6.0
00:04:55.220 SYMLINK libspdk_bdev_zone_block.so
00:04:55.220 SYMLINK libspdk_bdev_aio.so
00:04:55.220 LIB libspdk_bdev_raid.a
00:04:55.480 SO libspdk_bdev_raid.so.6.0
00:04:55.480 LIB libspdk_bdev_virtio.a
00:04:55.480 SO libspdk_bdev_virtio.so.6.0
00:04:55.480 SYMLINK libspdk_bdev_raid.so
00:04:55.480 SYMLINK libspdk_bdev_virtio.so
00:04:56.419 LIB libspdk_bdev_nvme.a
00:04:56.419 SO libspdk_bdev_nvme.so.7.0
00:04:56.679 SYMLINK libspdk_bdev_nvme.so
00:04:57.249 CC module/event/subsystems/iobuf/iobuf.o
00:04:57.249 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:04:57.249 CC module/event/subsystems/vmd/vmd.o
00:04:57.249 CC module/event/subsystems/vmd/vmd_rpc.o
00:04:57.249 CC module/event/subsystems/sock/sock.o
00:04:57.249 CC
module/event/subsystems/vhost_blk/vhost_blk.o
00:04:57.249 CC module/event/subsystems/scheduler/scheduler.o
00:04:57.249 CC module/event/subsystems/keyring/keyring.o
00:04:57.249 CC module/event/subsystems/fsdev/fsdev.o
00:04:57.249 LIB libspdk_event_keyring.a
00:04:57.249 LIB libspdk_event_vhost_blk.a
00:04:57.249 LIB libspdk_event_sock.a
00:04:57.249 LIB libspdk_event_vmd.a
00:04:57.249 LIB libspdk_event_scheduler.a
00:04:57.249 LIB libspdk_event_iobuf.a
00:04:57.249 LIB libspdk_event_fsdev.a
00:04:57.249 SO libspdk_event_keyring.so.1.0
00:04:57.249 SO libspdk_event_vhost_blk.so.3.0
00:04:57.249 SO libspdk_event_sock.so.5.0
00:04:57.249 SO libspdk_event_scheduler.so.4.0
00:04:57.249 SO libspdk_event_vmd.so.6.0
00:04:57.249 SO libspdk_event_iobuf.so.3.0
00:04:57.249 SO libspdk_event_fsdev.so.1.0
00:04:57.249 SYMLINK libspdk_event_keyring.so
00:04:57.249 SYMLINK libspdk_event_vhost_blk.so
00:04:57.249 SYMLINK libspdk_event_sock.so
00:04:57.249 SYMLINK libspdk_event_scheduler.so
00:04:57.249 SYMLINK libspdk_event_iobuf.so
00:04:57.249 SYMLINK libspdk_event_fsdev.so
00:04:57.249 SYMLINK libspdk_event_vmd.so
00:04:57.819 CC module/event/subsystems/accel/accel.o
00:04:57.819 LIB libspdk_event_accel.a
00:04:58.079 SO libspdk_event_accel.so.6.0
00:04:58.079 SYMLINK libspdk_event_accel.so
00:04:58.339 CC module/event/subsystems/bdev/bdev.o
00:04:58.599 LIB libspdk_event_bdev.a
00:04:58.599 SO libspdk_event_bdev.so.6.0
00:04:58.599 SYMLINK libspdk_event_bdev.so
00:04:59.167 CC module/event/subsystems/scsi/scsi.o
00:04:59.167 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:04:59.167 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:04:59.167 CC module/event/subsystems/ublk/ublk.o
00:04:59.167 CC module/event/subsystems/nbd/nbd.o
00:04:59.167 LIB libspdk_event_scsi.a
00:04:59.167 LIB libspdk_event_ublk.a
00:04:59.167 LIB libspdk_event_nbd.a
00:04:59.167 SO libspdk_event_scsi.so.6.0
00:04:59.167 SO libspdk_event_ublk.so.3.0
00:04:59.167 SO libspdk_event_nbd.so.6.0
00:04:59.167
SYMLINK libspdk_event_scsi.so
00:04:59.435 SYMLINK libspdk_event_nbd.so
00:04:59.435 LIB libspdk_event_nvmf.a
00:04:59.435 SYMLINK libspdk_event_ublk.so
00:04:59.435 SO libspdk_event_nvmf.so.6.0
00:04:59.435 SYMLINK libspdk_event_nvmf.so
00:04:59.715 CC module/event/subsystems/iscsi/iscsi.o
00:04:59.715 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:04:59.715 LIB libspdk_event_iscsi.a
00:04:59.715 SO libspdk_event_iscsi.so.6.0
00:04:59.715 LIB libspdk_event_vhost_scsi.a
00:04:59.975 SYMLINK libspdk_event_iscsi.so
00:04:59.975 SO libspdk_event_vhost_scsi.so.3.0
00:04:59.975 SYMLINK libspdk_event_vhost_scsi.so
00:05:00.235 SO libspdk.so.6.0
00:05:00.235 SYMLINK libspdk.so
00:05:00.495 TEST_HEADER include/spdk/accel.h
00:05:00.495 CXX app/trace/trace.o
00:05:00.495 TEST_HEADER include/spdk/accel_module.h
00:05:00.495 TEST_HEADER include/spdk/assert.h
00:05:00.495 TEST_HEADER include/spdk/barrier.h
00:05:00.495 CC app/trace_record/trace_record.o
00:05:00.495 TEST_HEADER include/spdk/base64.h
00:05:00.495 TEST_HEADER include/spdk/bdev.h
00:05:00.495 TEST_HEADER include/spdk/bdev_module.h
00:05:00.495 TEST_HEADER include/spdk/bdev_zone.h
00:05:00.495 TEST_HEADER include/spdk/bit_array.h
00:05:00.495 TEST_HEADER include/spdk/bit_pool.h
00:05:00.495 TEST_HEADER include/spdk/blob_bdev.h
00:05:00.495 TEST_HEADER include/spdk/blobfs_bdev.h
00:05:00.495 TEST_HEADER include/spdk/blobfs.h
00:05:00.495 CC examples/interrupt_tgt/interrupt_tgt.o
00:05:00.495 TEST_HEADER include/spdk/blob.h
00:05:00.495 TEST_HEADER include/spdk/conf.h
00:05:00.495 TEST_HEADER include/spdk/config.h
00:05:00.495 TEST_HEADER include/spdk/cpuset.h
00:05:00.495 TEST_HEADER include/spdk/crc16.h
00:05:00.495 TEST_HEADER include/spdk/crc32.h
00:05:00.495 TEST_HEADER include/spdk/crc64.h
00:05:00.495 TEST_HEADER include/spdk/dif.h
00:05:00.495 TEST_HEADER include/spdk/dma.h
00:05:00.495 TEST_HEADER include/spdk/endian.h
00:05:00.495 TEST_HEADER include/spdk/env_dpdk.h
00:05:00.495 TEST_HEADER
include/spdk/env.h
00:05:00.495 TEST_HEADER include/spdk/event.h
00:05:00.495 TEST_HEADER include/spdk/fd_group.h
00:05:00.495 TEST_HEADER include/spdk/fd.h
00:05:00.495 TEST_HEADER include/spdk/file.h
00:05:00.495 TEST_HEADER include/spdk/fsdev.h
00:05:00.495 TEST_HEADER include/spdk/fsdev_module.h
00:05:00.495 TEST_HEADER include/spdk/ftl.h
00:05:00.495 TEST_HEADER include/spdk/fuse_dispatcher.h
00:05:00.495 TEST_HEADER include/spdk/gpt_spec.h
00:05:00.495 TEST_HEADER include/spdk/hexlify.h
00:05:00.495 TEST_HEADER include/spdk/histogram_data.h
00:05:00.495 TEST_HEADER include/spdk/idxd.h
00:05:00.495 TEST_HEADER include/spdk/idxd_spec.h
00:05:00.495 TEST_HEADER include/spdk/init.h
00:05:00.495 CC examples/util/zipf/zipf.o
00:05:00.495 CC test/thread/poller_perf/poller_perf.o
00:05:00.495 TEST_HEADER include/spdk/ioat.h
00:05:00.495 TEST_HEADER include/spdk/ioat_spec.h
00:05:00.495 TEST_HEADER include/spdk/iscsi_spec.h
00:05:00.495 CC examples/ioat/perf/perf.o
00:05:00.495 TEST_HEADER include/spdk/json.h
00:05:00.495 TEST_HEADER include/spdk/jsonrpc.h
00:05:00.495 TEST_HEADER include/spdk/keyring.h
00:05:00.495 TEST_HEADER include/spdk/keyring_module.h
00:05:00.495 TEST_HEADER include/spdk/likely.h
00:05:00.495 TEST_HEADER include/spdk/log.h
00:05:00.495 TEST_HEADER include/spdk/lvol.h
00:05:00.495 TEST_HEADER include/spdk/md5.h
00:05:00.495 TEST_HEADER include/spdk/memory.h
00:05:00.495 TEST_HEADER include/spdk/mmio.h
00:05:00.495 TEST_HEADER include/spdk/nbd.h
00:05:00.495 TEST_HEADER include/spdk/net.h
00:05:00.495 TEST_HEADER include/spdk/notify.h
00:05:00.495 TEST_HEADER include/spdk/nvme.h
00:05:00.495 CC test/app/bdev_svc/bdev_svc.o
00:05:00.495 TEST_HEADER include/spdk/nvme_intel.h
00:05:00.495 TEST_HEADER include/spdk/nvme_ocssd.h
00:05:00.495 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:05:00.495 CC test/dma/test_dma/test_dma.o
00:05:00.495 TEST_HEADER include/spdk/nvme_spec.h
00:05:00.495 TEST_HEADER include/spdk/nvme_zns.h
00:05:00.495 TEST_HEADER
include/spdk/nvmf_cmd.h
00:05:00.495 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:05:00.495 TEST_HEADER include/spdk/nvmf.h
00:05:00.756 TEST_HEADER include/spdk/nvmf_spec.h
00:05:00.756 TEST_HEADER include/spdk/nvmf_transport.h
00:05:00.756 TEST_HEADER include/spdk/opal.h
00:05:00.756 TEST_HEADER include/spdk/opal_spec.h
00:05:00.756 TEST_HEADER include/spdk/pci_ids.h
00:05:00.756 TEST_HEADER include/spdk/pipe.h
00:05:00.756 TEST_HEADER include/spdk/queue.h
00:05:00.756 TEST_HEADER include/spdk/reduce.h
00:05:00.756 TEST_HEADER include/spdk/rpc.h
00:05:00.756 TEST_HEADER include/spdk/scheduler.h
00:05:00.756 TEST_HEADER include/spdk/scsi.h
00:05:00.756 TEST_HEADER include/spdk/scsi_spec.h
00:05:00.756 TEST_HEADER include/spdk/sock.h
00:05:00.756 TEST_HEADER include/spdk/stdinc.h
00:05:00.756 TEST_HEADER include/spdk/string.h
00:05:00.756 TEST_HEADER include/spdk/thread.h
00:05:00.756 CC test/env/mem_callbacks/mem_callbacks.o
00:05:00.756 TEST_HEADER include/spdk/trace.h
00:05:00.756 TEST_HEADER include/spdk/trace_parser.h
00:05:00.756 TEST_HEADER include/spdk/tree.h
00:05:00.756 TEST_HEADER include/spdk/ublk.h
00:05:00.756 TEST_HEADER include/spdk/util.h
00:05:00.756 TEST_HEADER include/spdk/uuid.h
00:05:00.756 TEST_HEADER include/spdk/version.h
00:05:00.756 TEST_HEADER include/spdk/vfio_user_pci.h
00:05:00.756 TEST_HEADER include/spdk/vfio_user_spec.h
00:05:00.756 TEST_HEADER include/spdk/vhost.h
00:05:00.756 TEST_HEADER include/spdk/vmd.h
00:05:00.756 TEST_HEADER include/spdk/xor.h
00:05:00.756 TEST_HEADER include/spdk/zipf.h
00:05:00.756 CXX test/cpp_headers/accel.o
00:05:00.756 LINK poller_perf
00:05:00.756 LINK interrupt_tgt
00:05:00.756 LINK zipf
00:05:00.756 LINK spdk_trace_record
00:05:00.756 LINK ioat_perf
00:05:00.756 LINK bdev_svc
00:05:00.756 CXX test/cpp_headers/accel_module.o
00:05:00.756 CXX test/cpp_headers/assert.o
00:05:01.016 CXX test/cpp_headers/barrier.o
00:05:01.016 LINK spdk_trace
00:05:01.016 CXX test/cpp_headers/base64.o
00:05:01.016 CXX
test/cpp_headers/bdev.o
00:05:01.016 CXX test/cpp_headers/bdev_module.o
00:05:01.016 CXX test/cpp_headers/bdev_zone.o
00:05:01.016 CC examples/ioat/verify/verify.o
00:05:01.277 CC app/nvmf_tgt/nvmf_main.o
00:05:01.277 CC app/iscsi_tgt/iscsi_tgt.o
00:05:01.277 LINK test_dma
00:05:01.277 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:05:01.277 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:05:01.277 CXX test/cpp_headers/bit_array.o
00:05:01.277 LINK mem_callbacks
00:05:01.277 CC app/spdk_tgt/spdk_tgt.o
00:05:01.277 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:05:01.277 LINK verify
00:05:01.277 LINK nvmf_tgt
00:05:01.277 CXX test/cpp_headers/bit_pool.o
00:05:01.277 LINK iscsi_tgt
00:05:01.537 CXX test/cpp_headers/blob_bdev.o
00:05:01.537 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:05:01.537 CC test/env/vtophys/vtophys.o
00:05:01.537 LINK spdk_tgt
00:05:01.537 LINK vtophys
00:05:01.798 CXX test/cpp_headers/blobfs_bdev.o
00:05:01.798 CXX test/cpp_headers/blobfs.o
00:05:01.798 CC app/spdk_lspci/spdk_lspci.o
00:05:01.798 LINK nvme_fuzz
00:05:01.798 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:05:01.798 CC examples/thread/thread/thread_ex.o
00:05:01.798 CC examples/sock/hello_world/hello_sock.o
00:05:01.798 LINK spdk_lspci
00:05:01.798 CXX test/cpp_headers/blob.o
00:05:01.798 CC app/spdk_nvme_perf/perf.o
00:05:01.798 LINK env_dpdk_post_init
00:05:02.071 CC app/spdk_nvme_identify/identify.o
00:05:02.071 CC test/app/histogram_perf/histogram_perf.o
00:05:02.071 LINK vhost_fuzz
00:05:02.071 LINK thread
00:05:02.071 CXX test/cpp_headers/conf.o
00:05:02.071 LINK hello_sock
00:05:02.071 CXX test/cpp_headers/config.o
00:05:02.071 LINK histogram_perf
00:05:02.071 CC test/app/jsoncat/jsoncat.o
00:05:02.071 CC test/env/memory/memory_ut.o
00:05:02.071 CXX test/cpp_headers/cpuset.o
00:05:02.332 CXX test/cpp_headers/crc16.o
00:05:02.332 CC test/env/pci/pci_ut.o
00:05:02.332 LINK jsoncat
00:05:02.332 CC examples/vmd/lsvmd/lsvmd.o
00:05:02.332 CC examples/vmd/led/led.o
00:05:02.332
CXX test/cpp_headers/crc32.o
00:05:02.332 CC app/spdk_nvme_discover/discovery_aer.o
00:05:02.592 LINK lsvmd
00:05:02.592 LINK led
00:05:02.592 CXX test/cpp_headers/crc64.o
00:05:02.592 CC app/spdk_top/spdk_top.o
00:05:02.592 LINK spdk_nvme_discover
00:05:02.592 LINK pci_ut
00:05:02.592 CXX test/cpp_headers/dif.o
00:05:02.852 CC test/app/stub/stub.o
00:05:02.852 LINK spdk_nvme_perf
00:05:02.852 CC examples/idxd/perf/perf.o
00:05:02.852 CXX test/cpp_headers/dma.o
00:05:02.852 CXX test/cpp_headers/endian.o
00:05:02.852 LINK stub
00:05:02.852 LINK spdk_nvme_identify
00:05:03.112 CXX test/cpp_headers/env_dpdk.o
00:05:03.112 CC examples/fsdev/hello_world/hello_fsdev.o
00:05:03.112 LINK idxd_perf
00:05:03.112 CC examples/accel/perf/accel_perf.o
00:05:03.112 CXX test/cpp_headers/env.o
00:05:03.112 CC examples/blob/hello_world/hello_blob.o
00:05:03.372 CC app/vhost/vhost.o
00:05:03.372 LINK iscsi_fuzz
00:05:03.372 CXX test/cpp_headers/event.o
00:05:03.372 CC examples/nvme/hello_world/hello_world.o
00:05:03.372 LINK memory_ut
00:05:03.372 CC examples/nvme/reconnect/reconnect.o
00:05:03.372 LINK hello_fsdev
00:05:03.372 LINK hello_blob
00:05:03.372 LINK vhost
00:05:03.372 CXX test/cpp_headers/fd_group.o
00:05:03.633 CXX test/cpp_headers/fd.o
00:05:03.633 LINK spdk_top
00:05:03.633 LINK hello_world
00:05:03.633 CXX test/cpp_headers/file.o
00:05:03.633 CC test/rpc_client/rpc_client_test.o
00:05:03.633 LINK accel_perf
00:05:03.893 CC examples/blob/cli/blobcli.o
00:05:03.893 LINK reconnect
00:05:03.893 CXX test/cpp_headers/fsdev.o
00:05:03.893 CC test/accel/dif/dif.o
00:05:03.893 CC test/event/event_perf/event_perf.o
00:05:03.893 CC test/blobfs/mkfs/mkfs.o
00:05:03.893 CC app/spdk_dd/spdk_dd.o
00:05:03.893 LINK rpc_client_test
00:05:03.893 CC test/event/reactor/reactor.o
00:05:03.893 CC test/lvol/esnap/esnap.o
00:05:03.893 CXX test/cpp_headers/fsdev_module.o
00:05:03.893 CC examples/nvme/nvme_manage/nvme_manage.o
00:05:03.893 LINK event_perf
00:05:04.152 LINK mkfs
00:05:04.152
LINK reactor
00:05:04.152 CC test/event/reactor_perf/reactor_perf.o
00:05:04.152 CXX test/cpp_headers/ftl.o
00:05:04.152 LINK blobcli
00:05:04.152 LINK spdk_dd
00:05:04.152 LINK reactor_perf
00:05:04.412 CC examples/nvme/arbitration/arbitration.o
00:05:04.412 CC test/nvme/aer/aer.o
00:05:04.412 CXX test/cpp_headers/fuse_dispatcher.o
00:05:04.412 CC examples/nvme/hotplug/hotplug.o
00:05:04.412 CXX test/cpp_headers/gpt_spec.o
00:05:04.412 LINK nvme_manage
00:05:04.412 CC test/event/app_repeat/app_repeat.o
00:05:04.412 CC examples/nvme/cmb_copy/cmb_copy.o
00:05:04.671 CC app/fio/nvme/fio_plugin.o
00:05:04.671 LINK dif
00:05:04.671 LINK hotplug
00:05:04.671 LINK aer
00:05:04.671 LINK arbitration
00:05:04.671 CXX test/cpp_headers/hexlify.o
00:05:04.671 LINK app_repeat
00:05:04.671 LINK cmb_copy
00:05:04.671 CXX test/cpp_headers/histogram_data.o
00:05:04.671 CC examples/bdev/hello_world/hello_bdev.o
00:05:04.932 CC examples/nvme/abort/abort.o
00:05:04.932 CC test/nvme/reset/reset.o
00:05:04.932 CXX test/cpp_headers/idxd.o
00:05:04.932 CC test/event/scheduler/scheduler.o
00:05:04.932 CC examples/bdev/bdevperf/bdevperf.o
00:05:04.932 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:05:04.932 CC test/nvme/sgl/sgl.o
00:05:04.932 CXX test/cpp_headers/idxd_spec.o
00:05:04.932 LINK hello_bdev
00:05:05.192 LINK pmr_persistence
00:05:05.192 LINK reset
00:05:05.192 LINK scheduler
00:05:05.192 CXX test/cpp_headers/init.o
00:05:05.192 LINK spdk_nvme
00:05:05.192 LINK abort
00:05:05.192 LINK sgl
00:05:05.192 CXX test/cpp_headers/ioat.o
00:05:05.452 CC test/nvme/e2edp/nvme_dp.o
00:05:05.452 CXX test/cpp_headers/ioat_spec.o
00:05:05.452 CC test/nvme/overhead/overhead.o
00:05:05.452 CC app/fio/bdev/fio_plugin.o
00:05:05.452 CXX test/cpp_headers/iscsi_spec.o
00:05:05.452 CC test/bdev/bdevio/bdevio.o
00:05:05.452 CC test/nvme/err_injection/err_injection.o
00:05:05.452 CXX test/cpp_headers/json.o
00:05:05.452 CXX test/cpp_headers/jsonrpc.o
00:05:05.452 CC test/nvme/startup/startup.o
00:05:05.712 LINK nvme_dp
00:05:05.712 LINK err_injection
00:05:05.712 LINK overhead
00:05:05.712 CXX test/cpp_headers/keyring.o
00:05:05.712 CC test/nvme/reserve/reserve.o
00:05:05.712 LINK startup
00:05:05.712 LINK bdevperf
00:05:05.712 LINK bdevio
00:05:05.712 CXX test/cpp_headers/keyring_module.o
00:05:05.971 CC test/nvme/simple_copy/simple_copy.o
00:05:05.971 CC test/nvme/connect_stress/connect_stress.o
00:05:05.971 LINK spdk_bdev
00:05:05.971 CC test/nvme/boot_partition/boot_partition.o
00:05:05.971 LINK reserve
00:05:05.971 CXX test/cpp_headers/likely.o
00:05:05.971 CC test/nvme/compliance/nvme_compliance.o
00:05:05.971 LINK connect_stress
00:05:05.971 CXX test/cpp_headers/log.o
00:05:05.971 CC test/nvme/fused_ordering/fused_ordering.o
00:05:05.971 LINK boot_partition
00:05:05.971 LINK simple_copy
00:05:06.232 CC examples/nvmf/nvmf/nvmf.o
00:05:06.232 CXX test/cpp_headers/lvol.o
00:05:06.232 CC test/nvme/doorbell_aers/doorbell_aers.o
00:05:06.232 CXX test/cpp_headers/md5.o
00:05:06.232 CC test/nvme/fdp/fdp.o
00:05:06.232 CXX test/cpp_headers/memory.o
00:05:06.232 LINK fused_ordering
00:05:06.232 CC test/nvme/cuse/cuse.o
00:05:06.232 CXX test/cpp_headers/mmio.o
00:05:06.232 LINK nvme_compliance
00:05:06.232 CXX test/cpp_headers/nbd.o
00:05:06.491 CXX test/cpp_headers/net.o
00:05:06.491 LINK doorbell_aers
00:05:06.491 CXX test/cpp_headers/notify.o
00:05:06.492 CXX test/cpp_headers/nvme.o
00:05:06.492 LINK nvmf
00:05:06.492 CXX test/cpp_headers/nvme_intel.o
00:05:06.492 CXX test/cpp_headers/nvme_ocssd.o
00:05:06.492 CXX test/cpp_headers/nvme_ocssd_spec.o
00:05:06.492 LINK fdp
00:05:06.492 CXX test/cpp_headers/nvme_spec.o
00:05:06.492 CXX test/cpp_headers/nvme_zns.o
00:05:06.492 CXX test/cpp_headers/nvmf_cmd.o
00:05:06.492 CXX test/cpp_headers/nvmf_fc_spec.o
00:05:06.492 CXX test/cpp_headers/nvmf.o
00:05:06.751 CXX test/cpp_headers/nvmf_spec.o
00:05:06.751 CXX test/cpp_headers/nvmf_transport.o
00:05:06.751 CXX test/cpp_headers/opal.o
00:05:06.751 CXX
test/cpp_headers/opal_spec.o 00:05:06.751 CXX test/cpp_headers/pci_ids.o 00:05:06.751 CXX test/cpp_headers/pipe.o 00:05:06.751 CXX test/cpp_headers/queue.o 00:05:06.751 CXX test/cpp_headers/reduce.o 00:05:06.751 CXX test/cpp_headers/rpc.o 00:05:06.751 CXX test/cpp_headers/scheduler.o 00:05:06.751 CXX test/cpp_headers/scsi.o 00:05:06.751 CXX test/cpp_headers/scsi_spec.o 00:05:07.011 CXX test/cpp_headers/sock.o 00:05:07.011 CXX test/cpp_headers/stdinc.o 00:05:07.011 CXX test/cpp_headers/string.o 00:05:07.011 CXX test/cpp_headers/thread.o 00:05:07.011 CXX test/cpp_headers/trace.o 00:05:07.011 CXX test/cpp_headers/trace_parser.o 00:05:07.011 CXX test/cpp_headers/tree.o 00:05:07.011 CXX test/cpp_headers/ublk.o 00:05:07.011 CXX test/cpp_headers/util.o 00:05:07.011 CXX test/cpp_headers/uuid.o 00:05:07.011 CXX test/cpp_headers/version.o 00:05:07.011 CXX test/cpp_headers/vfio_user_pci.o 00:05:07.011 CXX test/cpp_headers/vfio_user_spec.o 00:05:07.011 CXX test/cpp_headers/vhost.o 00:05:07.011 CXX test/cpp_headers/vmd.o 00:05:07.011 CXX test/cpp_headers/xor.o 00:05:07.270 CXX test/cpp_headers/zipf.o 00:05:07.530 LINK cuse 00:05:09.462 LINK esnap 00:05:10.032 00:05:10.032 real 1m20.991s 00:05:10.032 user 6m10.687s 00:05:10.032 sys 1m18.162s 00:05:10.032 12:25:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:10.032 12:25:15 make -- common/autotest_common.sh@10 -- $ set +x 00:05:10.032 ************************************ 00:05:10.032 END TEST make 00:05:10.032 ************************************ 00:05:10.032 12:25:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:10.032 12:25:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:10.032 12:25:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:10.032 12:25:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.032 12:25:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:10.032 12:25:15 -- pm/common@44 -- $ 
pid=6196 00:05:10.032 12:25:15 -- pm/common@50 -- $ kill -TERM 6196 00:05:10.032 12:25:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.032 12:25:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:10.032 12:25:15 -- pm/common@44 -- $ pid=6198 00:05:10.032 12:25:15 -- pm/common@50 -- $ kill -TERM 6198 00:05:10.032 12:25:15 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.032 12:25:15 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.032 12:25:15 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:10.032 12:25:15 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:10.032 12:25:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.032 12:25:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.032 12:25:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.032 12:25:15 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.032 12:25:15 -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.292 12:25:15 -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.292 12:25:15 -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.292 12:25:15 -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.292 12:25:15 -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.292 12:25:15 -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.292 12:25:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.292 12:25:15 -- scripts/common.sh@344 -- # case "$op" in 00:05:10.292 12:25:15 -- scripts/common.sh@345 -- # : 1 00:05:10.292 12:25:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.292 12:25:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.292 12:25:15 -- scripts/common.sh@365 -- # decimal 1 00:05:10.292 12:25:15 -- scripts/common.sh@353 -- # local d=1 00:05:10.292 12:25:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.292 12:25:15 -- scripts/common.sh@355 -- # echo 1 00:05:10.292 12:25:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.292 12:25:15 -- scripts/common.sh@366 -- # decimal 2 00:05:10.292 12:25:15 -- scripts/common.sh@353 -- # local d=2 00:05:10.292 12:25:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.292 12:25:15 -- scripts/common.sh@355 -- # echo 2 00:05:10.292 12:25:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.292 12:25:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.292 12:25:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.292 12:25:15 -- scripts/common.sh@368 -- # return 0 00:05:10.292 12:25:15 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.292 12:25:15 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:10.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.292 --rc genhtml_branch_coverage=1 00:05:10.292 --rc genhtml_function_coverage=1 00:05:10.292 --rc genhtml_legend=1 00:05:10.292 --rc geninfo_all_blocks=1 00:05:10.292 --rc geninfo_unexecuted_blocks=1 00:05:10.292 00:05:10.292 ' 00:05:10.292 12:25:15 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:10.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.292 --rc genhtml_branch_coverage=1 00:05:10.292 --rc genhtml_function_coverage=1 00:05:10.292 --rc genhtml_legend=1 00:05:10.292 --rc geninfo_all_blocks=1 00:05:10.292 --rc geninfo_unexecuted_blocks=1 00:05:10.292 00:05:10.292 ' 00:05:10.292 12:25:15 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:10.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.292 --rc genhtml_branch_coverage=1 00:05:10.292 --rc 
genhtml_function_coverage=1 00:05:10.292 --rc genhtml_legend=1 00:05:10.292 --rc geninfo_all_blocks=1 00:05:10.292 --rc geninfo_unexecuted_blocks=1 00:05:10.292 00:05:10.292 ' 00:05:10.292 12:25:15 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:10.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.292 --rc genhtml_branch_coverage=1 00:05:10.292 --rc genhtml_function_coverage=1 00:05:10.292 --rc genhtml_legend=1 00:05:10.292 --rc geninfo_all_blocks=1 00:05:10.292 --rc geninfo_unexecuted_blocks=1 00:05:10.292 00:05:10.292 ' 00:05:10.292 12:25:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:10.292 12:25:15 -- nvmf/common.sh@7 -- # uname -s 00:05:10.292 12:25:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.292 12:25:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.292 12:25:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.292 12:25:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.292 12:25:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.292 12:25:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.292 12:25:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.292 12:25:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.292 12:25:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.292 12:25:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.292 12:25:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f01462e2-3748-4a1e-90b0-ad8a7610ee7d 00:05:10.292 12:25:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=f01462e2-3748-4a1e-90b0-ad8a7610ee7d 00:05:10.292 12:25:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.292 12:25:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.292 12:25:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.292 12:25:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:10.292 12:25:15 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.292 12:25:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.292 12:25:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.292 12:25:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.292 12:25:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.292 12:25:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.292 12:25:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.292 12:25:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.292 12:25:15 -- paths/export.sh@5 -- # export PATH 00:05:10.292 12:25:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.292 12:25:15 -- nvmf/common.sh@51 -- # : 0 00:05:10.292 12:25:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.292 12:25:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.292 12:25:15 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:10.292 12:25:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.292 12:25:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.292 12:25:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.292 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.292 12:25:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.292 12:25:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.292 12:25:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.292 12:25:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:10.292 12:25:15 -- spdk/autotest.sh@32 -- # uname -s 00:05:10.292 12:25:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:10.292 12:25:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:10.292 12:25:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:10.292 12:25:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:10.292 12:25:15 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:10.292 12:25:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:10.292 12:25:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:10.292 12:25:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:10.293 12:25:15 -- spdk/autotest.sh@48 -- # udevadm_pid=66896 00:05:10.293 12:25:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:10.293 12:25:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:10.293 12:25:15 -- pm/common@17 -- # local monitor 00:05:10.293 12:25:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.293 12:25:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.293 12:25:15 -- pm/common@25 -- # sleep 1 00:05:10.293 12:25:15 -- pm/common@21 -- # date +%s 00:05:10.293 12:25:15 -- 
pm/common@21 -- # date +%s 00:05:10.293 12:25:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732019115 00:05:10.293 12:25:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732019115 00:05:10.293 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732019115_collect-cpu-load.pm.log 00:05:10.293 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732019115_collect-vmstat.pm.log 00:05:11.233 12:25:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:11.233 12:25:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:11.233 12:25:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.233 12:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.233 12:25:16 -- spdk/autotest.sh@59 -- # create_test_list 00:05:11.233 12:25:16 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:11.233 12:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.493 12:25:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:11.493 12:25:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:11.493 12:25:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:11.493 12:25:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:11.493 12:25:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:11.493 12:25:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:11.493 12:25:16 -- common/autotest_common.sh@1455 -- # uname 00:05:11.493 12:25:16 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:11.493 12:25:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:11.493 12:25:16 -- common/autotest_common.sh@1475 -- 
# uname 00:05:11.493 12:25:16 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:11.493 12:25:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:11.493 12:25:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:11.493 lcov: LCOV version 1.15 00:05:11.493 12:25:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:26.393 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:26.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:41.343 12:25:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:41.343 12:25:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.343 12:25:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.343 12:25:44 -- spdk/autotest.sh@78 -- # rm -f 00:05:41.343 12:25:44 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.343 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:41.343 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:41.343 12:25:45 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:41.343 12:25:45 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:41.343 12:25:45 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:41.343 12:25:45 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:41.343 
12:25:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:41.343 12:25:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:41.343 12:25:45 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:41.343 12:25:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:41.343 12:25:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:41.343 12:25:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:41.343 12:25:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:41.343 12:25:45 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:41.343 12:25:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:41.343 12:25:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:41.343 12:25:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:41.343 12:25:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:41.343 12:25:45 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:41.343 12:25:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:41.343 12:25:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:41.343 12:25:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:41.343 12:25:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:41.343 12:25:45 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:41.343 12:25:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:41.343 12:25:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:41.343 12:25:45 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:41.343 12:25:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:41.343 12:25:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:41.343 12:25:45 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:41.343 12:25:45 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:41.343 12:25:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:41.343 No valid GPT data, bailing 00:05:41.343 12:25:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:41.343 12:25:45 -- scripts/common.sh@394 -- # pt= 00:05:41.343 12:25:45 -- scripts/common.sh@395 -- # return 1 00:05:41.343 12:25:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:41.343 1+0 records in 00:05:41.343 1+0 records out 00:05:41.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526315 s, 199 MB/s 00:05:41.343 12:25:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:41.343 12:25:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:41.343 12:25:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:41.343 12:25:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:41.343 12:25:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:41.343 No valid GPT data, bailing 00:05:41.343 12:25:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:41.343 12:25:45 -- scripts/common.sh@394 -- # pt= 00:05:41.343 12:25:45 -- scripts/common.sh@395 -- # return 1 00:05:41.343 12:25:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:41.343 1+0 records in 00:05:41.343 1+0 records out 00:05:41.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00702666 s, 149 MB/s 00:05:41.343 12:25:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:41.343 12:25:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:41.343 12:25:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:41.343 12:25:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:41.343 12:25:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:41.343 No valid GPT data, bailing 00:05:41.344 12:25:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:41.344 12:25:46 -- scripts/common.sh@394 -- # pt= 00:05:41.344 12:25:46 -- scripts/common.sh@395 -- # return 1 00:05:41.344 12:25:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:41.344 1+0 records in 00:05:41.344 1+0 records out 00:05:41.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00693389 s, 151 MB/s 00:05:41.344 12:25:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:41.344 12:25:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:41.344 12:25:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:41.344 12:25:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:41.344 12:25:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:41.344 No valid GPT data, bailing 00:05:41.344 12:25:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:41.344 12:25:46 -- scripts/common.sh@394 -- # pt= 00:05:41.344 12:25:46 -- scripts/common.sh@395 -- # return 1 00:05:41.344 12:25:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:41.344 1+0 records in 00:05:41.344 1+0 records out 00:05:41.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440559 s, 238 MB/s 00:05:41.344 12:25:46 -- spdk/autotest.sh@105 -- # sync 00:05:41.611 12:25:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:41.611 12:25:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:41.611 12:25:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:44.152 12:25:49 -- spdk/autotest.sh@111 -- # uname -s 00:05:44.152 12:25:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:44.152 12:25:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:44.152 12:25:49 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:45.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.092 Hugepages 00:05:45.092 node hugesize free / total 00:05:45.092 node0 1048576kB 0 / 0 00:05:45.092 node0 2048kB 0 / 0 00:05:45.092 00:05:45.092 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:45.092 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:45.092 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:45.352 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:45.352 12:25:50 -- spdk/autotest.sh@117 -- # uname -s 00:05:45.352 12:25:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:45.352 12:25:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:45.352 12:25:50 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.181 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.181 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.181 12:25:51 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:47.560 12:25:52 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:47.560 12:25:52 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:47.560 12:25:52 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:47.560 12:25:52 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:47.560 12:25:52 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:47.560 12:25:52 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:47.560 12:25:52 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.560 12:25:52 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.560 12:25:52 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:47.560 12:25:52 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:47.560 12:25:52 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:47.560 12:25:52 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.820 Waiting for block devices as requested 00:05:47.820 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:48.080 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:48.080 12:25:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:48.080 12:25:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:48.080 12:25:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:48.080 12:25:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:48.080 12:25:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:48.080 12:25:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:48.080 12:25:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:48.080 12:25:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:48.080 12:25:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:48.080 12:25:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:48.080 12:25:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:48.080 12:25:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:48.080 12:25:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:48.080 12:25:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:48.080 12:25:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:48.080 12:25:53 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:48.080 12:25:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:48.080 12:25:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:48.080 12:25:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:48.080 12:25:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:48.080 12:25:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:48.080 12:25:53 -- common/autotest_common.sh@1541 -- # continue 00:05:48.080 12:25:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:48.080 12:25:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:48.080 12:25:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:48.080 12:25:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:48.080 12:25:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:48.080 12:25:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:48.080 12:25:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:48.080 12:25:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:48.080 12:25:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:48.080 12:25:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:48.080 12:25:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:48.080 12:25:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:48.080 12:25:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:48.081 12:25:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:48.081 12:25:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:48.081 12:25:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:48.081 12:25:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:48.081 12:25:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:48.081 12:25:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:48.341 12:25:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:48.341 12:25:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:48.341 12:25:53 -- common/autotest_common.sh@1541 -- # continue 00:05:48.341 12:25:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:48.341 12:25:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.341 12:25:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.341 12:25:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:48.341 12:25:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.341 12:25:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.341 12:25:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:49.281 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:49.281 12:25:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:49.281 12:25:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.281 12:25:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.281 12:25:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:49.281 12:25:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:49.281 12:25:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:49.281 12:25:54 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:49.281 12:25:54 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:49.281 12:25:54 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:49.281 12:25:54 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:49.281 12:25:54 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:49.281 
12:25:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:49.281 12:25:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:49.281 12:25:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:49.281 12:25:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:49.281 12:25:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:49.281 12:25:54 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:49.281 12:25:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:49.281 12:25:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:49.281 12:25:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:49.281 12:25:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:49.281 12:25:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:49.541 12:25:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:49.541 12:25:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:49.541 12:25:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:49.541 12:25:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:49.541 12:25:54 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:49.541 12:25:54 -- common/autotest_common.sh@1570 -- # return 0 00:05:49.541 12:25:54 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:49.541 12:25:54 -- common/autotest_common.sh@1578 -- # return 0 00:05:49.541 12:25:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:49.541 12:25:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:49.541 12:25:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:49.541 12:25:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:49.541 12:25:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:49.541 12:25:54 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.541 12:25:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.541 12:25:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:49.541 12:25:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:49.541 12:25:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.541 12:25:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.541 12:25:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.541 ************************************ 00:05:49.541 START TEST env 00:05:49.541 ************************************ 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:49.541 * Looking for test storage... 00:05:49.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.541 12:25:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.541 12:25:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.541 12:25:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.541 12:25:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.541 12:25:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.541 12:25:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.541 12:25:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.541 12:25:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.541 12:25:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.541 12:25:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.541 12:25:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.541 12:25:54 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:49.541 12:25:54 env -- scripts/common.sh@345 -- # : 1 00:05:49.541 12:25:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.541 12:25:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.541 12:25:54 env -- scripts/common.sh@365 -- # decimal 1 00:05:49.541 12:25:54 env -- scripts/common.sh@353 -- # local d=1 00:05:49.541 12:25:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.541 12:25:54 env -- scripts/common.sh@355 -- # echo 1 00:05:49.541 12:25:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.541 12:25:54 env -- scripts/common.sh@366 -- # decimal 2 00:05:49.541 12:25:54 env -- scripts/common.sh@353 -- # local d=2 00:05:49.541 12:25:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.541 12:25:54 env -- scripts/common.sh@355 -- # echo 2 00:05:49.541 12:25:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.541 12:25:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.541 12:25:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.541 12:25:54 env -- scripts/common.sh@368 -- # return 0 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.541 --rc genhtml_branch_coverage=1 00:05:49.541 --rc genhtml_function_coverage=1 00:05:49.541 --rc genhtml_legend=1 00:05:49.541 --rc geninfo_all_blocks=1 00:05:49.541 --rc geninfo_unexecuted_blocks=1 00:05:49.541 00:05:49.541 ' 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.541 --rc genhtml_branch_coverage=1 00:05:49.541 --rc genhtml_function_coverage=1 00:05:49.541 --rc genhtml_legend=1 00:05:49.541 --rc 
geninfo_all_blocks=1 00:05:49.541 --rc geninfo_unexecuted_blocks=1 00:05:49.541 00:05:49.541 ' 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.541 --rc genhtml_branch_coverage=1 00:05:49.541 --rc genhtml_function_coverage=1 00:05:49.541 --rc genhtml_legend=1 00:05:49.541 --rc geninfo_all_blocks=1 00:05:49.541 --rc geninfo_unexecuted_blocks=1 00:05:49.541 00:05:49.541 ' 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.541 --rc genhtml_branch_coverage=1 00:05:49.541 --rc genhtml_function_coverage=1 00:05:49.541 --rc genhtml_legend=1 00:05:49.541 --rc geninfo_all_blocks=1 00:05:49.541 --rc geninfo_unexecuted_blocks=1 00:05:49.541 00:05:49.541 ' 00:05:49.541 12:25:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.541 12:25:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.541 12:25:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.800 ************************************ 00:05:49.800 START TEST env_memory 00:05:49.800 ************************************ 00:05:49.800 12:25:54 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:49.800 00:05:49.800 00:05:49.800 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.800 http://cunit.sourceforge.net/ 00:05:49.800 00:05:49.800 00:05:49.800 Suite: memory 00:05:49.800 Test: alloc and free memory map ...[2024-11-19 12:25:54.868413] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:49.800 passed 00:05:49.801 Test: mem map translation ...[2024-11-19 12:25:54.909800] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:49.801 [2024-11-19 12:25:54.909872] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:49.801 [2024-11-19 12:25:54.909956] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:49.801 [2024-11-19 12:25:54.909996] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:49.801 passed 00:05:49.801 Test: mem map registration ...[2024-11-19 12:25:54.972852] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:49.801 [2024-11-19 12:25:54.972919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:49.801 passed 00:05:50.061 Test: mem map adjacent registrations ...passed 00:05:50.061 00:05:50.061 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.061 suites 1 1 n/a 0 0 00:05:50.061 tests 4 4 4 0 0 00:05:50.061 asserts 152 152 152 0 n/a 00:05:50.061 00:05:50.061 Elapsed time = 0.228 seconds 00:05:50.061 00:05:50.061 real 0m0.276s 00:05:50.061 user 0m0.243s 00:05:50.061 sys 0m0.023s 00:05:50.061 12:25:55 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.061 12:25:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:50.061 ************************************ 00:05:50.061 END TEST env_memory 00:05:50.061 ************************************ 00:05:50.061 12:25:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:50.061 
12:25:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.061 12:25:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.061 12:25:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.061 ************************************ 00:05:50.061 START TEST env_vtophys 00:05:50.061 ************************************ 00:05:50.061 12:25:55 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:50.061 EAL: lib.eal log level changed from notice to debug 00:05:50.061 EAL: Detected lcore 0 as core 0 on socket 0 00:05:50.061 EAL: Detected lcore 1 as core 0 on socket 0 00:05:50.061 EAL: Detected lcore 2 as core 0 on socket 0 00:05:50.061 EAL: Detected lcore 3 as core 0 on socket 0 00:05:50.061 EAL: Detected lcore 4 as core 0 on socket 0 00:05:50.061 EAL: Detected lcore 5 as core 0 on socket 0 00:05:50.061 EAL: Detected lcore 6 as core 0 on socket 0 00:05:50.061 EAL: Detected lcore 7 as core 0 on socket 0 00:05:50.061 EAL: Detected lcore 8 as core 0 on socket 0 00:05:50.061 EAL: Detected lcore 9 as core 0 on socket 0 00:05:50.062 EAL: Maximum logical cores by configuration: 128 00:05:50.062 EAL: Detected CPU lcores: 10 00:05:50.062 EAL: Detected NUMA nodes: 1 00:05:50.062 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:50.062 EAL: Detected shared linkage of DPDK 00:05:50.062 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:50.062 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:50.062 EAL: Registered [vdev] bus. 
00:05:50.062 EAL: bus.vdev log level changed from disabled to notice 00:05:50.062 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:50.062 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:50.062 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:50.062 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:50.062 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:50.062 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:50.062 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:50.062 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:50.062 EAL: No shared files mode enabled, IPC will be disabled 00:05:50.062 EAL: No shared files mode enabled, IPC is disabled 00:05:50.062 EAL: Selected IOVA mode 'PA' 00:05:50.062 EAL: Probing VFIO support... 00:05:50.062 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:50.062 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:50.062 EAL: Ask a virtual area of 0x2e000 bytes 00:05:50.062 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:50.062 EAL: Setting up physically contiguous memory... 
00:05:50.062 EAL: Setting maximum number of open files to 524288 00:05:50.062 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:50.062 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:50.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.062 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:50.062 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.062 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:50.062 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:50.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.062 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:50.062 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.062 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:50.062 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:50.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.062 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:50.062 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.062 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:50.062 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:50.062 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.062 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:50.062 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.062 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.062 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:50.062 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:50.062 EAL: Hugepages will be freed exactly as allocated. 
00:05:50.062 EAL: No shared files mode enabled, IPC is disabled 00:05:50.062 EAL: No shared files mode enabled, IPC is disabled 00:05:50.327 EAL: TSC frequency is ~2290000 KHz 00:05:50.327 EAL: Main lcore 0 is ready (tid=7f3ca746ea40;cpuset=[0]) 00:05:50.327 EAL: Trying to obtain current memory policy. 00:05:50.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.327 EAL: Restoring previous memory policy: 0 00:05:50.327 EAL: request: mp_malloc_sync 00:05:50.327 EAL: No shared files mode enabled, IPC is disabled 00:05:50.327 EAL: Heap on socket 0 was expanded by 2MB 00:05:50.327 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:50.327 EAL: No shared files mode enabled, IPC is disabled 00:05:50.327 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:50.327 EAL: Mem event callback 'spdk:(nil)' registered 00:05:50.327 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:50.327 00:05:50.327 00:05:50.327 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.327 http://cunit.sourceforge.net/ 00:05:50.327 00:05:50.327 00:05:50.327 Suite: components_suite 00:05:50.601 Test: vtophys_malloc_test ...passed 00:05:50.601 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:50.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.601 EAL: Restoring previous memory policy: 4 00:05:50.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.601 EAL: request: mp_malloc_sync 00:05:50.601 EAL: No shared files mode enabled, IPC is disabled 00:05:50.601 EAL: Heap on socket 0 was expanded by 4MB 00:05:50.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.601 EAL: request: mp_malloc_sync 00:05:50.601 EAL: No shared files mode enabled, IPC is disabled 00:05:50.601 EAL: Heap on socket 0 was shrunk by 4MB 00:05:50.601 EAL: Trying to obtain current memory policy. 
00:05:50.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.601 EAL: Restoring previous memory policy: 4 00:05:50.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.601 EAL: request: mp_malloc_sync 00:05:50.601 EAL: No shared files mode enabled, IPC is disabled 00:05:50.601 EAL: Heap on socket 0 was expanded by 6MB 00:05:50.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was shrunk by 6MB 00:05:50.602 EAL: Trying to obtain current memory policy. 00:05:50.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.602 EAL: Restoring previous memory policy: 4 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was expanded by 10MB 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was shrunk by 10MB 00:05:50.602 EAL: Trying to obtain current memory policy. 00:05:50.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.602 EAL: Restoring previous memory policy: 4 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was expanded by 18MB 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was shrunk by 18MB 00:05:50.602 EAL: Trying to obtain current memory policy. 
00:05:50.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.602 EAL: Restoring previous memory policy: 4 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was expanded by 34MB 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was shrunk by 34MB 00:05:50.602 EAL: Trying to obtain current memory policy. 00:05:50.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.602 EAL: Restoring previous memory policy: 4 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was expanded by 66MB 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was shrunk by 66MB 00:05:50.602 EAL: Trying to obtain current memory policy. 00:05:50.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.602 EAL: Restoring previous memory policy: 4 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was expanded by 130MB 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was shrunk by 130MB 00:05:50.602 EAL: Trying to obtain current memory policy. 
00:05:50.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.602 EAL: Restoring previous memory policy: 4 00:05:50.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.602 EAL: request: mp_malloc_sync 00:05:50.602 EAL: No shared files mode enabled, IPC is disabled 00:05:50.602 EAL: Heap on socket 0 was expanded by 258MB 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was shrunk by 258MB 00:05:50.871 EAL: Trying to obtain current memory policy. 00:05:50.871 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.871 EAL: Restoring previous memory policy: 4 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.871 EAL: request: mp_malloc_sync 00:05:50.871 EAL: No shared files mode enabled, IPC is disabled 00:05:50.871 EAL: Heap on socket 0 was expanded by 514MB 00:05:50.871 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.130 EAL: request: mp_malloc_sync 00:05:51.130 EAL: No shared files mode enabled, IPC is disabled 00:05:51.130 EAL: Heap on socket 0 was shrunk by 514MB 00:05:51.130 EAL: Trying to obtain current memory policy. 
00:05:51.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.390 EAL: Restoring previous memory policy: 4 00:05:51.390 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.390 EAL: request: mp_malloc_sync 00:05:51.390 EAL: No shared files mode enabled, IPC is disabled 00:05:51.390 EAL: Heap on socket 0 was expanded by 1026MB 00:05:51.390 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.649 EAL: request: mp_malloc_sync 00:05:51.649 EAL: No shared files mode enabled, IPC is disabled 00:05:51.649 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:51.649 passed 00:05:51.649 00:05:51.649 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.649 suites 1 1 n/a 0 0 00:05:51.649 tests 2 2 2 0 0 00:05:51.649 asserts 5358 5358 5358 0 n/a 00:05:51.649 00:05:51.649 Elapsed time = 1.339 seconds 00:05:51.649 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.649 EAL: request: mp_malloc_sync 00:05:51.649 EAL: No shared files mode enabled, IPC is disabled 00:05:51.649 EAL: Heap on socket 0 was shrunk by 2MB 00:05:51.649 EAL: No shared files mode enabled, IPC is disabled 00:05:51.649 EAL: No shared files mode enabled, IPC is disabled 00:05:51.649 EAL: No shared files mode enabled, IPC is disabled 00:05:51.649 00:05:51.649 real 0m1.604s 00:05:51.649 user 0m0.761s 00:05:51.649 sys 0m0.706s 00:05:51.649 12:25:56 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.649 12:25:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:51.649 ************************************ 00:05:51.649 END TEST env_vtophys 00:05:51.649 ************************************ 00:05:51.649 12:25:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:51.649 12:25:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.649 12:25:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.649 12:25:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.649 
************************************ 00:05:51.649 START TEST env_pci 00:05:51.649 ************************************ 00:05:51.649 12:25:56 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:51.649 00:05:51.649 00:05:51.649 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.649 http://cunit.sourceforge.net/ 00:05:51.649 00:05:51.649 00:05:51.649 Suite: pci 00:05:51.649 Test: pci_hook ...[2024-11-19 12:25:56.848328] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69135 has claimed it 00:05:51.649 passed 00:05:51.649 00:05:51.649 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.649 suites 1 1 n/a 0 0 00:05:51.649 tests 1 1 1 0 0 00:05:51.649 asserts 25 25 25 0 n/a 00:05:51.649 00:05:51.649 Elapsed time = 0.006 seconds 00:05:51.649 EAL: Cannot find device (10000:00:01.0) 00:05:51.649 EAL: Failed to attach device on primary process 00:05:51.910 00:05:51.910 real 0m0.094s 00:05:51.910 user 0m0.045s 00:05:51.910 sys 0m0.048s 00:05:51.910 12:25:56 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.910 ************************************ 00:05:51.910 END TEST env_pci 00:05:51.910 ************************************ 00:05:51.910 12:25:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:51.910 12:25:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:51.910 12:25:56 env -- env/env.sh@15 -- # uname 00:05:51.910 12:25:56 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:51.910 12:25:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:51.910 12:25:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:51.910 12:25:56 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:51.910 12:25:56 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.910 12:25:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.910 ************************************ 00:05:51.910 START TEST env_dpdk_post_init 00:05:51.910 ************************************ 00:05:51.910 12:25:56 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:51.910 EAL: Detected CPU lcores: 10 00:05:51.910 EAL: Detected NUMA nodes: 1 00:05:51.910 EAL: Detected shared linkage of DPDK 00:05:51.910 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:51.910 EAL: Selected IOVA mode 'PA' 00:05:51.910 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.170 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:52.170 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:52.170 Starting DPDK initialization... 00:05:52.170 Starting SPDK post initialization... 00:05:52.170 SPDK NVMe probe 00:05:52.170 Attaching to 0000:00:10.0 00:05:52.170 Attaching to 0000:00:11.0 00:05:52.170 Attached to 0000:00:10.0 00:05:52.170 Attached to 0000:00:11.0 00:05:52.170 Cleaning up... 
00:05:52.170 ************************************ 00:05:52.170 END TEST env_dpdk_post_init 00:05:52.170 ************************************ 00:05:52.170 00:05:52.170 real 0m0.274s 00:05:52.170 user 0m0.085s 00:05:52.170 sys 0m0.088s 00:05:52.170 12:25:57 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.170 12:25:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.170 12:25:57 env -- env/env.sh@26 -- # uname 00:05:52.170 12:25:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:52.170 12:25:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.170 12:25:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.170 12:25:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.170 12:25:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.170 ************************************ 00:05:52.170 START TEST env_mem_callbacks 00:05:52.170 ************************************ 00:05:52.170 12:25:57 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.170 EAL: Detected CPU lcores: 10 00:05:52.170 EAL: Detected NUMA nodes: 1 00:05:52.170 EAL: Detected shared linkage of DPDK 00:05:52.170 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.170 EAL: Selected IOVA mode 'PA' 00:05:52.430 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.430 00:05:52.430 00:05:52.430 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.430 http://cunit.sourceforge.net/ 00:05:52.430 00:05:52.430 00:05:52.430 Suite: memory 00:05:52.430 Test: test ... 
00:05:52.430 register 0x200000200000 2097152 00:05:52.430 malloc 3145728 00:05:52.430 register 0x200000400000 4194304 00:05:52.430 buf 0x200000500000 len 3145728 PASSED 00:05:52.430 malloc 64 00:05:52.430 buf 0x2000004fff40 len 64 PASSED 00:05:52.430 malloc 4194304 00:05:52.430 register 0x200000800000 6291456 00:05:52.430 buf 0x200000a00000 len 4194304 PASSED 00:05:52.430 free 0x200000500000 3145728 00:05:52.430 free 0x2000004fff40 64 00:05:52.430 unregister 0x200000400000 4194304 PASSED 00:05:52.430 free 0x200000a00000 4194304 00:05:52.430 unregister 0x200000800000 6291456 PASSED 00:05:52.430 malloc 8388608 00:05:52.430 register 0x200000400000 10485760 00:05:52.430 buf 0x200000600000 len 8388608 PASSED 00:05:52.430 free 0x200000600000 8388608 00:05:52.430 unregister 0x200000400000 10485760 PASSED 00:05:52.430 passed 00:05:52.430 00:05:52.430 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.430 suites 1 1 n/a 0 0 00:05:52.430 tests 1 1 1 0 0 00:05:52.430 asserts 15 15 15 0 n/a 00:05:52.430 00:05:52.430 Elapsed time = 0.012 seconds 00:05:52.430 00:05:52.430 real 0m0.203s 00:05:52.430 user 0m0.037s 00:05:52.430 sys 0m0.064s 00:05:52.430 12:25:57 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.430 12:25:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:52.430 ************************************ 00:05:52.430 END TEST env_mem_callbacks 00:05:52.430 ************************************ 00:05:52.430 00:05:52.430 real 0m3.015s 00:05:52.430 user 0m1.410s 00:05:52.430 sys 0m1.263s 00:05:52.430 12:25:57 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.430 12:25:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.430 ************************************ 00:05:52.430 END TEST env 00:05:52.430 ************************************ 00:05:52.430 12:25:57 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:52.430 12:25:57 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.430 12:25:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.430 12:25:57 -- common/autotest_common.sh@10 -- # set +x 00:05:52.430 ************************************ 00:05:52.430 START TEST rpc 00:05:52.430 ************************************ 00:05:52.430 12:25:57 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:52.691 * Looking for test storage... 00:05:52.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.691 12:25:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.691 12:25:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.691 12:25:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.691 12:25:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.691 12:25:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.691 12:25:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.691 12:25:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.691 12:25:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.691 12:25:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.691 12:25:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.691 12:25:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.691 12:25:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:52.691 12:25:57 rpc -- scripts/common.sh@345 -- # : 1 00:05:52.691 12:25:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.691 12:25:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.691 12:25:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:52.691 12:25:57 rpc -- scripts/common.sh@353 -- # local d=1 00:05:52.691 12:25:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.691 12:25:57 rpc -- scripts/common.sh@355 -- # echo 1 00:05:52.691 12:25:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.691 12:25:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:52.691 12:25:57 rpc -- scripts/common.sh@353 -- # local d=2 00:05:52.691 12:25:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.691 12:25:57 rpc -- scripts/common.sh@355 -- # echo 2 00:05:52.691 12:25:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.691 12:25:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.691 12:25:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.691 12:25:57 rpc -- scripts/common.sh@368 -- # return 0 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.691 --rc genhtml_branch_coverage=1 00:05:52.691 --rc genhtml_function_coverage=1 00:05:52.691 --rc genhtml_legend=1 00:05:52.691 --rc geninfo_all_blocks=1 00:05:52.691 --rc geninfo_unexecuted_blocks=1 00:05:52.691 00:05:52.691 ' 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.691 --rc genhtml_branch_coverage=1 00:05:52.691 --rc genhtml_function_coverage=1 00:05:52.691 --rc genhtml_legend=1 00:05:52.691 --rc geninfo_all_blocks=1 00:05:52.691 --rc geninfo_unexecuted_blocks=1 00:05:52.691 00:05:52.691 ' 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:52.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:52.691 --rc genhtml_branch_coverage=1 00:05:52.691 --rc genhtml_function_coverage=1 00:05:52.691 --rc genhtml_legend=1 00:05:52.691 --rc geninfo_all_blocks=1 00:05:52.691 --rc geninfo_unexecuted_blocks=1 00:05:52.691 00:05:52.691 ' 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.691 --rc genhtml_branch_coverage=1 00:05:52.691 --rc genhtml_function_coverage=1 00:05:52.691 --rc genhtml_legend=1 00:05:52.691 --rc geninfo_all_blocks=1 00:05:52.691 --rc geninfo_unexecuted_blocks=1 00:05:52.691 00:05:52.691 ' 00:05:52.691 12:25:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:52.691 12:25:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69262 00:05:52.691 12:25:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.691 12:25:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69262 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@831 -- # '[' -z 69262 ']' 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.691 12:25:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.951 [2024-11-19 12:25:57.990801] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:52.951 [2024-11-19 12:25:57.991005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69262 ] 00:05:52.951 [2024-11-19 12:25:58.152081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.951 [2024-11-19 12:25:58.199811] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:52.951 [2024-11-19 12:25:58.199940] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69262' to capture a snapshot of events at runtime. 00:05:52.951 [2024-11-19 12:25:58.199997] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.951 [2024-11-19 12:25:58.200028] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.951 [2024-11-19 12:25:58.200052] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69262 for offline analysis/debug. 
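The trace notices above show that starting spdk_tgt with `-e bdev` enables the bdev tracepoint group and creates a shared-memory trace file named after the app and pid. A minimal sketch of those two relationships, using only values reported in this log (the mask constants come from the `trace_get_info` output later in the run, not from SPDK headers):

```python
# Tpoint group mask bits as reported by trace_get_info in this log
# (assumed from the log output, not taken from SPDK source).
TPOINT_GROUPS = {
    "iscsi_conn": 0x2,
    "scsi": 0x4,
    "bdev": 0x8,
    "nvmf_rdma": 0x10,
    "nvmf_tcp": 0x20,
}

def group_mask(*names):
    # OR together the bits for each named tpoint group
    mask = 0
    for n in names:
        mask |= TPOINT_GROUPS[n]
    return mask

def shm_path(app_name, pid):
    # Matches the path seen in the log: /dev/shm/spdk_tgt_trace.pid69262
    return f"/dev/shm/{app_name}_trace.pid{pid}"

print(hex(group_mask("bdev")))      # 0x8
print(shm_path("spdk_tgt", 69262))  # /dev/shm/spdk_tgt_trace.pid69262
```

This is why the later `rpc_trace_cmd_test` expects `tpoint_group_mask` to be `"0x8"` and `tpoint_shm_path` to be `/dev/shm/spdk_tgt_trace.pid69262`.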
00:05:52.951 [2024-11-19 12:25:58.200117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.889 12:25:58 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.889 12:25:58 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.889 12:25:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.889 12:25:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.889 12:25:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:53.889 12:25:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:53.889 12:25:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.889 12:25:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.889 12:25:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.889 ************************************ 00:05:53.889 START TEST rpc_integrity 00:05:53.889 ************************************ 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:53.889 12:25:58 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:53.889 { 00:05:53.889 "name": "Malloc0", 00:05:53.889 "aliases": [ 00:05:53.889 "dfe57b10-c989-4556-b2b9-d8dc8980718e" 00:05:53.889 ], 00:05:53.889 "product_name": "Malloc disk", 00:05:53.889 "block_size": 512, 00:05:53.889 "num_blocks": 16384, 00:05:53.889 "uuid": "dfe57b10-c989-4556-b2b9-d8dc8980718e", 00:05:53.889 "assigned_rate_limits": { 00:05:53.889 "rw_ios_per_sec": 0, 00:05:53.889 "rw_mbytes_per_sec": 0, 00:05:53.889 "r_mbytes_per_sec": 0, 00:05:53.889 "w_mbytes_per_sec": 0 00:05:53.889 }, 00:05:53.889 "claimed": false, 00:05:53.889 "zoned": false, 00:05:53.889 "supported_io_types": { 00:05:53.889 "read": true, 00:05:53.889 "write": true, 00:05:53.889 "unmap": true, 00:05:53.889 "flush": true, 00:05:53.889 "reset": true, 00:05:53.889 "nvme_admin": false, 00:05:53.889 "nvme_io": false, 00:05:53.889 "nvme_io_md": false, 00:05:53.889 "write_zeroes": true, 00:05:53.889 "zcopy": true, 00:05:53.889 "get_zone_info": false, 00:05:53.889 "zone_management": false, 00:05:53.889 "zone_append": false, 00:05:53.889 "compare": false, 00:05:53.889 "compare_and_write": false, 00:05:53.889 "abort": true, 00:05:53.889 "seek_hole": false, 
00:05:53.889 "seek_data": false, 00:05:53.889 "copy": true, 00:05:53.889 "nvme_iov_md": false 00:05:53.889 }, 00:05:53.889 "memory_domains": [ 00:05:53.889 { 00:05:53.889 "dma_device_id": "system", 00:05:53.889 "dma_device_type": 1 00:05:53.889 }, 00:05:53.889 { 00:05:53.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.889 "dma_device_type": 2 00:05:53.889 } 00:05:53.889 ], 00:05:53.889 "driver_specific": {} 00:05:53.889 } 00:05:53.889 ]' 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:53.889 12:25:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.889 12:25:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.889 [2024-11-19 12:25:59.002811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:53.889 [2024-11-19 12:25:59.002875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:53.889 [2024-11-19 12:25:59.002929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:53.889 [2024-11-19 12:25:59.002950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:53.889 [2024-11-19 12:25:59.005153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:53.889 [2024-11-19 12:25:59.005230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:53.889 Passthru0 00:05:53.889 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.889 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:53.889 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.889 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:53.889 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.889 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:53.889 { 00:05:53.889 "name": "Malloc0", 00:05:53.889 "aliases": [ 00:05:53.889 "dfe57b10-c989-4556-b2b9-d8dc8980718e" 00:05:53.889 ], 00:05:53.889 "product_name": "Malloc disk", 00:05:53.889 "block_size": 512, 00:05:53.889 "num_blocks": 16384, 00:05:53.889 "uuid": "dfe57b10-c989-4556-b2b9-d8dc8980718e", 00:05:53.889 "assigned_rate_limits": { 00:05:53.889 "rw_ios_per_sec": 0, 00:05:53.889 "rw_mbytes_per_sec": 0, 00:05:53.889 "r_mbytes_per_sec": 0, 00:05:53.889 "w_mbytes_per_sec": 0 00:05:53.889 }, 00:05:53.889 "claimed": true, 00:05:53.889 "claim_type": "exclusive_write", 00:05:53.889 "zoned": false, 00:05:53.889 "supported_io_types": { 00:05:53.889 "read": true, 00:05:53.890 "write": true, 00:05:53.890 "unmap": true, 00:05:53.890 "flush": true, 00:05:53.890 "reset": true, 00:05:53.890 "nvme_admin": false, 00:05:53.890 "nvme_io": false, 00:05:53.890 "nvme_io_md": false, 00:05:53.890 "write_zeroes": true, 00:05:53.890 "zcopy": true, 00:05:53.890 "get_zone_info": false, 00:05:53.890 "zone_management": false, 00:05:53.890 "zone_append": false, 00:05:53.890 "compare": false, 00:05:53.890 "compare_and_write": false, 00:05:53.890 "abort": true, 00:05:53.890 "seek_hole": false, 00:05:53.890 "seek_data": false, 00:05:53.890 "copy": true, 00:05:53.890 "nvme_iov_md": false 00:05:53.890 }, 00:05:53.890 "memory_domains": [ 00:05:53.890 { 00:05:53.890 "dma_device_id": "system", 00:05:53.890 "dma_device_type": 1 00:05:53.890 }, 00:05:53.890 { 00:05:53.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.890 "dma_device_type": 2 00:05:53.890 } 00:05:53.890 ], 00:05:53.890 "driver_specific": {} 00:05:53.890 }, 00:05:53.890 { 00:05:53.890 "name": "Passthru0", 00:05:53.890 "aliases": [ 00:05:53.890 "3333293d-c03a-5b5d-bf8e-761175853b02" 00:05:53.890 ], 00:05:53.890 "product_name": "passthru", 00:05:53.890 
"block_size": 512, 00:05:53.890 "num_blocks": 16384, 00:05:53.890 "uuid": "3333293d-c03a-5b5d-bf8e-761175853b02", 00:05:53.890 "assigned_rate_limits": { 00:05:53.890 "rw_ios_per_sec": 0, 00:05:53.890 "rw_mbytes_per_sec": 0, 00:05:53.890 "r_mbytes_per_sec": 0, 00:05:53.890 "w_mbytes_per_sec": 0 00:05:53.890 }, 00:05:53.890 "claimed": false, 00:05:53.890 "zoned": false, 00:05:53.890 "supported_io_types": { 00:05:53.890 "read": true, 00:05:53.890 "write": true, 00:05:53.890 "unmap": true, 00:05:53.890 "flush": true, 00:05:53.890 "reset": true, 00:05:53.890 "nvme_admin": false, 00:05:53.890 "nvme_io": false, 00:05:53.890 "nvme_io_md": false, 00:05:53.890 "write_zeroes": true, 00:05:53.890 "zcopy": true, 00:05:53.890 "get_zone_info": false, 00:05:53.890 "zone_management": false, 00:05:53.890 "zone_append": false, 00:05:53.890 "compare": false, 00:05:53.890 "compare_and_write": false, 00:05:53.890 "abort": true, 00:05:53.890 "seek_hole": false, 00:05:53.890 "seek_data": false, 00:05:53.890 "copy": true, 00:05:53.890 "nvme_iov_md": false 00:05:53.890 }, 00:05:53.890 "memory_domains": [ 00:05:53.890 { 00:05:53.890 "dma_device_id": "system", 00:05:53.890 "dma_device_type": 1 00:05:53.890 }, 00:05:53.890 { 00:05:53.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.890 "dma_device_type": 2 00:05:53.890 } 00:05:53.890 ], 00:05:53.890 "driver_specific": { 00:05:53.890 "passthru": { 00:05:53.890 "name": "Passthru0", 00:05:53.890 "base_bdev_name": "Malloc0" 00:05:53.890 } 00:05:53.890 } 00:05:53.890 } 00:05:53.890 ]' 00:05:53.890 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:53.890 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:53.890 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:53.890 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.890 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.890 12:25:59 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.890 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:53.890 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.890 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.890 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.890 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:53.890 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.890 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.890 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.890 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:53.890 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.149 ************************************ 00:05:54.149 END TEST rpc_integrity 00:05:54.149 ************************************ 00:05:54.149 12:25:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.149 00:05:54.149 real 0m0.329s 00:05:54.149 user 0m0.192s 00:05:54.149 sys 0m0.059s 00:05:54.149 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.149 12:25:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.149 12:25:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:54.149 12:25:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.149 12:25:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.149 12:25:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.149 ************************************ 00:05:54.149 START TEST rpc_plugins 00:05:54.149 ************************************ 00:05:54.149 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:54.149 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:54.149 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.149 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.149 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.149 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:54.150 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:54.150 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.150 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.150 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.150 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:54.150 { 00:05:54.150 "name": "Malloc1", 00:05:54.150 "aliases": [ 00:05:54.150 "a9b8c3f1-d0f1-4e0c-a523-4044e906b179" 00:05:54.150 ], 00:05:54.150 "product_name": "Malloc disk", 00:05:54.150 "block_size": 4096, 00:05:54.150 "num_blocks": 256, 00:05:54.150 "uuid": "a9b8c3f1-d0f1-4e0c-a523-4044e906b179", 00:05:54.150 "assigned_rate_limits": { 00:05:54.150 "rw_ios_per_sec": 0, 00:05:54.150 "rw_mbytes_per_sec": 0, 00:05:54.150 "r_mbytes_per_sec": 0, 00:05:54.150 "w_mbytes_per_sec": 0 00:05:54.150 }, 00:05:54.150 "claimed": false, 00:05:54.150 "zoned": false, 00:05:54.150 "supported_io_types": { 00:05:54.150 "read": true, 00:05:54.150 "write": true, 00:05:54.150 "unmap": true, 00:05:54.150 "flush": true, 00:05:54.150 "reset": true, 00:05:54.150 "nvme_admin": false, 00:05:54.150 "nvme_io": false, 00:05:54.150 "nvme_io_md": false, 00:05:54.150 "write_zeroes": true, 00:05:54.150 "zcopy": true, 00:05:54.150 "get_zone_info": false, 00:05:54.150 "zone_management": false, 00:05:54.150 "zone_append": false, 00:05:54.150 "compare": false, 00:05:54.150 "compare_and_write": false, 00:05:54.150 "abort": true, 00:05:54.150 "seek_hole": false, 00:05:54.150 "seek_data": false, 00:05:54.150 "copy": 
true, 00:05:54.150 "nvme_iov_md": false 00:05:54.150 }, 00:05:54.150 "memory_domains": [ 00:05:54.150 { 00:05:54.150 "dma_device_id": "system", 00:05:54.150 "dma_device_type": 1 00:05:54.150 }, 00:05:54.150 { 00:05:54.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.150 "dma_device_type": 2 00:05:54.150 } 00:05:54.150 ], 00:05:54.150 "driver_specific": {} 00:05:54.150 } 00:05:54.150 ]' 00:05:54.150 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:54.150 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:54.150 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:54.150 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.150 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.150 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.150 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:54.150 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.150 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.150 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.150 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:54.150 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:54.410 ************************************ 00:05:54.410 END TEST rpc_plugins 00:05:54.410 ************************************ 00:05:54.410 12:25:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:54.410 00:05:54.410 real 0m0.166s 00:05:54.410 user 0m0.097s 00:05:54.410 sys 0m0.029s 00:05:54.410 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.410 12:25:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.410 12:25:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:54.410 12:25:59 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.410 12:25:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.410 12:25:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.410 ************************************ 00:05:54.410 START TEST rpc_trace_cmd_test 00:05:54.410 ************************************ 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:54.410 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69262", 00:05:54.410 "tpoint_group_mask": "0x8", 00:05:54.410 "iscsi_conn": { 00:05:54.410 "mask": "0x2", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "scsi": { 00:05:54.410 "mask": "0x4", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "bdev": { 00:05:54.410 "mask": "0x8", 00:05:54.410 "tpoint_mask": "0xffffffffffffffff" 00:05:54.410 }, 00:05:54.410 "nvmf_rdma": { 00:05:54.410 "mask": "0x10", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "nvmf_tcp": { 00:05:54.410 "mask": "0x20", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "ftl": { 00:05:54.410 "mask": "0x40", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "blobfs": { 00:05:54.410 "mask": "0x80", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "dsa": { 00:05:54.410 "mask": "0x200", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "thread": { 00:05:54.410 "mask": "0x400", 00:05:54.410 
"tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "nvme_pcie": { 00:05:54.410 "mask": "0x800", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "iaa": { 00:05:54.410 "mask": "0x1000", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "nvme_tcp": { 00:05:54.410 "mask": "0x2000", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "bdev_nvme": { 00:05:54.410 "mask": "0x4000", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "sock": { 00:05:54.410 "mask": "0x8000", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "blob": { 00:05:54.410 "mask": "0x10000", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 }, 00:05:54.410 "bdev_raid": { 00:05:54.410 "mask": "0x20000", 00:05:54.410 "tpoint_mask": "0x0" 00:05:54.410 } 00:05:54.410 }' 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:54.410 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:54.670 ************************************ 00:05:54.670 END TEST rpc_trace_cmd_test 00:05:54.670 ************************************ 00:05:54.670 12:25:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:54.670 00:05:54.670 real 0m0.243s 00:05:54.670 user 0m0.191s 00:05:54.670 sys 0m0.039s 00:05:54.670 12:25:59 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.670 12:25:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 12:25:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:54.670 12:25:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:54.670 12:25:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:54.670 12:25:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.670 12:25:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.670 12:25:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 ************************************ 00:05:54.670 START TEST rpc_daemon_integrity 00:05:54.670 ************************************ 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.670 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.670 { 00:05:54.670 "name": "Malloc2", 00:05:54.670 "aliases": [ 00:05:54.670 "370cdcc3-43f1-4e7a-a8db-0b4170e2346e" 00:05:54.670 ], 00:05:54.670 "product_name": "Malloc disk", 00:05:54.670 "block_size": 512, 00:05:54.670 "num_blocks": 16384, 00:05:54.670 "uuid": "370cdcc3-43f1-4e7a-a8db-0b4170e2346e", 00:05:54.670 "assigned_rate_limits": { 00:05:54.670 "rw_ios_per_sec": 0, 00:05:54.670 "rw_mbytes_per_sec": 0, 00:05:54.670 "r_mbytes_per_sec": 0, 00:05:54.670 "w_mbytes_per_sec": 0 00:05:54.670 }, 00:05:54.670 "claimed": false, 00:05:54.670 "zoned": false, 00:05:54.670 "supported_io_types": { 00:05:54.670 "read": true, 00:05:54.670 "write": true, 00:05:54.670 "unmap": true, 00:05:54.670 "flush": true, 00:05:54.670 "reset": true, 00:05:54.670 "nvme_admin": false, 00:05:54.670 "nvme_io": false, 00:05:54.670 "nvme_io_md": false, 00:05:54.670 "write_zeroes": true, 00:05:54.670 "zcopy": true, 00:05:54.670 "get_zone_info": false, 00:05:54.670 "zone_management": false, 00:05:54.670 "zone_append": false, 00:05:54.670 "compare": false, 00:05:54.670 "compare_and_write": false, 00:05:54.670 "abort": true, 00:05:54.670 "seek_hole": false, 00:05:54.670 "seek_data": false, 00:05:54.670 "copy": true, 00:05:54.670 "nvme_iov_md": false 00:05:54.670 }, 00:05:54.670 "memory_domains": [ 00:05:54.670 { 00:05:54.670 "dma_device_id": "system", 00:05:54.670 "dma_device_type": 1 00:05:54.670 }, 00:05:54.670 { 00:05:54.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.670 "dma_device_type": 2 00:05:54.670 } 00:05:54.670 ], 00:05:54.671 "driver_specific": {} 00:05:54.671 } 00:05:54.671 ]' 
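The integrity tests above repeatedly pipe `bdev_get_bdevs` output through `jq length` and compare the count. A small sketch of the same checks in Python, run against a JSON fragment trimmed from the Malloc2 entry in this log (fields omitted for brevity):

```python
import json

# Trimmed from the bdev_get_bdevs output for Malloc2 in this log.
bdevs_json = '''
[
  {
    "name": "Malloc2",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 16384,
    "claimed": false
  }
]
'''

bdevs = json.loads(bdevs_json)

# Equivalent of the test's `jq length` check: exactly one bdev after create.
assert len(bdevs) == 1
# A freshly created malloc bdev is unclaimed; it becomes claimed
# (claim_type "exclusive_write") once a passthru vbdev is layered on it.
assert bdevs[0]["claimed"] is False
print(bdevs[0]["name"])  # Malloc2
```

After `bdev_passthru_create -b Malloc2 -p Passthru0`, the same query returns two entries (length 2), which is the `'[' 2 == 2 ']'` comparison visible further down.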
00:05:54.671 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.671 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.671 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:54.671 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.671 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.671 [2024-11-19 12:25:59.922872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:54.671 [2024-11-19 12:25:59.922934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.671 [2024-11-19 12:25:59.922958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:54.671 [2024-11-19 12:25:59.922967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.671 [2024-11-19 12:25:59.925446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.671 [2024-11-19 12:25:59.925483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.930 Passthru0 00:05:54.930 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.930 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.930 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.931 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.931 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.931 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.931 { 00:05:54.931 "name": "Malloc2", 00:05:54.931 "aliases": [ 00:05:54.931 "370cdcc3-43f1-4e7a-a8db-0b4170e2346e" 00:05:54.931 ], 00:05:54.931 "product_name": "Malloc disk", 00:05:54.931 "block_size": 
512, 00:05:54.931 "num_blocks": 16384, 00:05:54.931 "uuid": "370cdcc3-43f1-4e7a-a8db-0b4170e2346e", 00:05:54.931 "assigned_rate_limits": { 00:05:54.931 "rw_ios_per_sec": 0, 00:05:54.931 "rw_mbytes_per_sec": 0, 00:05:54.931 "r_mbytes_per_sec": 0, 00:05:54.931 "w_mbytes_per_sec": 0 00:05:54.931 }, 00:05:54.931 "claimed": true, 00:05:54.931 "claim_type": "exclusive_write", 00:05:54.931 "zoned": false, 00:05:54.931 "supported_io_types": { 00:05:54.931 "read": true, 00:05:54.931 "write": true, 00:05:54.931 "unmap": true, 00:05:54.931 "flush": true, 00:05:54.931 "reset": true, 00:05:54.931 "nvme_admin": false, 00:05:54.931 "nvme_io": false, 00:05:54.931 "nvme_io_md": false, 00:05:54.931 "write_zeroes": true, 00:05:54.931 "zcopy": true, 00:05:54.931 "get_zone_info": false, 00:05:54.931 "zone_management": false, 00:05:54.931 "zone_append": false, 00:05:54.931 "compare": false, 00:05:54.931 "compare_and_write": false, 00:05:54.931 "abort": true, 00:05:54.931 "seek_hole": false, 00:05:54.931 "seek_data": false, 00:05:54.931 "copy": true, 00:05:54.931 "nvme_iov_md": false 00:05:54.931 }, 00:05:54.931 "memory_domains": [ 00:05:54.931 { 00:05:54.931 "dma_device_id": "system", 00:05:54.931 "dma_device_type": 1 00:05:54.931 }, 00:05:54.931 { 00:05:54.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.931 "dma_device_type": 2 00:05:54.931 } 00:05:54.931 ], 00:05:54.931 "driver_specific": {} 00:05:54.931 }, 00:05:54.931 { 00:05:54.931 "name": "Passthru0", 00:05:54.931 "aliases": [ 00:05:54.931 "89d5008d-dbde-53d9-8aa8-612cb55850ab" 00:05:54.931 ], 00:05:54.931 "product_name": "passthru", 00:05:54.931 "block_size": 512, 00:05:54.931 "num_blocks": 16384, 00:05:54.931 "uuid": "89d5008d-dbde-53d9-8aa8-612cb55850ab", 00:05:54.931 "assigned_rate_limits": { 00:05:54.931 "rw_ios_per_sec": 0, 00:05:54.931 "rw_mbytes_per_sec": 0, 00:05:54.931 "r_mbytes_per_sec": 0, 00:05:54.931 "w_mbytes_per_sec": 0 00:05:54.931 }, 00:05:54.931 "claimed": false, 00:05:54.931 "zoned": false, 00:05:54.931 
"supported_io_types": { 00:05:54.931 "read": true, 00:05:54.931 "write": true, 00:05:54.931 "unmap": true, 00:05:54.931 "flush": true, 00:05:54.931 "reset": true, 00:05:54.931 "nvme_admin": false, 00:05:54.931 "nvme_io": false, 00:05:54.931 "nvme_io_md": false, 00:05:54.931 "write_zeroes": true, 00:05:54.931 "zcopy": true, 00:05:54.931 "get_zone_info": false, 00:05:54.931 "zone_management": false, 00:05:54.931 "zone_append": false, 00:05:54.931 "compare": false, 00:05:54.931 "compare_and_write": false, 00:05:54.931 "abort": true, 00:05:54.931 "seek_hole": false, 00:05:54.931 "seek_data": false, 00:05:54.931 "copy": true, 00:05:54.931 "nvme_iov_md": false 00:05:54.931 }, 00:05:54.931 "memory_domains": [ 00:05:54.931 { 00:05:54.931 "dma_device_id": "system", 00:05:54.931 "dma_device_type": 1 00:05:54.931 }, 00:05:54.931 { 00:05:54.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.931 "dma_device_type": 2 00:05:54.931 } 00:05:54.931 ], 00:05:54.931 "driver_specific": { 00:05:54.931 "passthru": { 00:05:54.931 "name": "Passthru0", 00:05:54.931 "base_bdev_name": "Malloc2" 00:05:54.931 } 00:05:54.931 } 00:05:54.931 } 00:05:54.931 ]' 00:05:54.931 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.931 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.931 12:25:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.931 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.931 12:25:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.931 ************************************ 00:05:54.931 END TEST rpc_daemon_integrity 00:05:54.931 ************************************ 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.931 00:05:54.931 real 0m0.310s 00:05:54.931 user 0m0.190s 00:05:54.931 sys 0m0.048s 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.931 12:26:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.931 12:26:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:54.931 12:26:00 rpc -- rpc/rpc.sh@84 -- # killprocess 69262 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@950 -- # '[' -z 69262 ']' 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@954 -- # kill -0 69262 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@955 -- # uname 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69262 00:05:54.931 killing process with pid 69262 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69262' 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@969 -- # kill 69262 00:05:54.931 12:26:00 rpc -- common/autotest_common.sh@974 -- # wait 69262 00:05:55.501 ************************************ 00:05:55.501 END TEST rpc 00:05:55.501 ************************************ 00:05:55.501 00:05:55.501 real 0m2.922s 00:05:55.501 user 0m3.486s 00:05:55.501 sys 0m0.878s 00:05:55.501 12:26:00 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.501 12:26:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.501 12:26:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:55.501 12:26:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.501 12:26:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.501 12:26:00 -- common/autotest_common.sh@10 -- # set +x 00:05:55.501 ************************************ 00:05:55.501 START TEST skip_rpc 00:05:55.501 ************************************ 00:05:55.501 12:26:00 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:55.762 * Looking for test storage... 
00:05:55.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.762 12:26:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.762 --rc genhtml_branch_coverage=1 00:05:55.762 --rc genhtml_function_coverage=1 00:05:55.762 --rc genhtml_legend=1 00:05:55.762 --rc geninfo_all_blocks=1 00:05:55.762 --rc geninfo_unexecuted_blocks=1 00:05:55.762 00:05:55.762 ' 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.762 --rc genhtml_branch_coverage=1 00:05:55.762 --rc genhtml_function_coverage=1 00:05:55.762 --rc genhtml_legend=1 00:05:55.762 --rc geninfo_all_blocks=1 00:05:55.762 --rc geninfo_unexecuted_blocks=1 00:05:55.762 00:05:55.762 ' 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:55.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.762 --rc genhtml_branch_coverage=1 00:05:55.762 --rc genhtml_function_coverage=1 00:05:55.762 --rc genhtml_legend=1 00:05:55.762 --rc geninfo_all_blocks=1 00:05:55.762 --rc geninfo_unexecuted_blocks=1 00:05:55.762 00:05:55.762 ' 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.762 --rc genhtml_branch_coverage=1 00:05:55.762 --rc genhtml_function_coverage=1 00:05:55.762 --rc genhtml_legend=1 00:05:55.762 --rc geninfo_all_blocks=1 00:05:55.762 --rc geninfo_unexecuted_blocks=1 00:05:55.762 00:05:55.762 ' 00:05:55.762 12:26:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.762 12:26:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:55.762 12:26:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.762 12:26:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.762 ************************************ 00:05:55.762 START TEST skip_rpc 00:05:55.762 ************************************ 00:05:55.762 12:26:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:55.762 12:26:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69469 00:05:55.762 12:26:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:55.762 12:26:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.762 12:26:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:55.762 [2024-11-19 12:26:00.977575] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:55.762 [2024-11-19 12:26:00.977722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69469 ] 00:05:56.022 [2024-11-19 12:26:01.136566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.022 [2024-11-19 12:26:01.185840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69469 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69469 ']' 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69469 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69469 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.301 killing process with pid 69469 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69469' 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69469 00:06:01.301 12:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69469 00:06:01.301 00:06:01.301 real 0m5.456s 00:06:01.301 user 0m5.028s 00:06:01.301 sys 0m0.356s 00:06:01.301 12:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.301 12:26:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.301 ************************************ 00:06:01.301 END TEST skip_rpc 00:06:01.301 ************************************ 00:06:01.301 12:26:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:01.301 12:26:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.301 12:26:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.301 12:26:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.302 
************************************ 00:06:01.302 START TEST skip_rpc_with_json 00:06:01.302 ************************************ 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69551 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69551 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69551 ']' 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.302 12:26:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.302 [2024-11-19 12:26:06.503039] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:01.302 [2024-11-19 12:26:06.503178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69551 ] 00:06:01.562 [2024-11-19 12:26:06.663465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.562 [2024-11-19 12:26:06.710958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.132 [2024-11-19 12:26:07.323356] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:02.132 request: 00:06:02.132 { 00:06:02.132 "trtype": "tcp", 00:06:02.132 "method": "nvmf_get_transports", 00:06:02.132 "req_id": 1 00:06:02.132 } 00:06:02.132 Got JSON-RPC error response 00:06:02.132 response: 00:06:02.132 { 00:06:02.132 "code": -19, 00:06:02.132 "message": "No such device" 00:06:02.132 } 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.132 [2024-11-19 12:26:07.335445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.132 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.392 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.392 12:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.392 { 00:06:02.392 "subsystems": [ 00:06:02.392 { 00:06:02.392 "subsystem": "fsdev", 00:06:02.392 "config": [ 00:06:02.392 { 00:06:02.392 "method": "fsdev_set_opts", 00:06:02.392 "params": { 00:06:02.392 "fsdev_io_pool_size": 65535, 00:06:02.393 "fsdev_io_cache_size": 256 00:06:02.393 } 00:06:02.393 } 00:06:02.393 ] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "keyring", 00:06:02.393 "config": [] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "iobuf", 00:06:02.393 "config": [ 00:06:02.393 { 00:06:02.393 "method": "iobuf_set_options", 00:06:02.393 "params": { 00:06:02.393 "small_pool_count": 8192, 00:06:02.393 "large_pool_count": 1024, 00:06:02.393 "small_bufsize": 8192, 00:06:02.393 "large_bufsize": 135168 00:06:02.393 } 00:06:02.393 } 00:06:02.393 ] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "sock", 00:06:02.393 "config": [ 00:06:02.393 { 00:06:02.393 "method": "sock_set_default_impl", 00:06:02.393 "params": { 00:06:02.393 "impl_name": "posix" 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "sock_impl_set_options", 00:06:02.393 "params": { 00:06:02.393 "impl_name": "ssl", 00:06:02.393 "recv_buf_size": 4096, 00:06:02.393 "send_buf_size": 4096, 00:06:02.393 "enable_recv_pipe": true, 00:06:02.393 "enable_quickack": false, 00:06:02.393 "enable_placement_id": 0, 00:06:02.393 
"enable_zerocopy_send_server": true, 00:06:02.393 "enable_zerocopy_send_client": false, 00:06:02.393 "zerocopy_threshold": 0, 00:06:02.393 "tls_version": 0, 00:06:02.393 "enable_ktls": false 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "sock_impl_set_options", 00:06:02.393 "params": { 00:06:02.393 "impl_name": "posix", 00:06:02.393 "recv_buf_size": 2097152, 00:06:02.393 "send_buf_size": 2097152, 00:06:02.393 "enable_recv_pipe": true, 00:06:02.393 "enable_quickack": false, 00:06:02.393 "enable_placement_id": 0, 00:06:02.393 "enable_zerocopy_send_server": true, 00:06:02.393 "enable_zerocopy_send_client": false, 00:06:02.393 "zerocopy_threshold": 0, 00:06:02.393 "tls_version": 0, 00:06:02.393 "enable_ktls": false 00:06:02.393 } 00:06:02.393 } 00:06:02.393 ] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "vmd", 00:06:02.393 "config": [] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "accel", 00:06:02.393 "config": [ 00:06:02.393 { 00:06:02.393 "method": "accel_set_options", 00:06:02.393 "params": { 00:06:02.393 "small_cache_size": 128, 00:06:02.393 "large_cache_size": 16, 00:06:02.393 "task_count": 2048, 00:06:02.393 "sequence_count": 2048, 00:06:02.393 "buf_count": 2048 00:06:02.393 } 00:06:02.393 } 00:06:02.393 ] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "bdev", 00:06:02.393 "config": [ 00:06:02.393 { 00:06:02.393 "method": "bdev_set_options", 00:06:02.393 "params": { 00:06:02.393 "bdev_io_pool_size": 65535, 00:06:02.393 "bdev_io_cache_size": 256, 00:06:02.393 "bdev_auto_examine": true, 00:06:02.393 "iobuf_small_cache_size": 128, 00:06:02.393 "iobuf_large_cache_size": 16 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "bdev_raid_set_options", 00:06:02.393 "params": { 00:06:02.393 "process_window_size_kb": 1024, 00:06:02.393 "process_max_bandwidth_mb_sec": 0 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "bdev_iscsi_set_options", 00:06:02.393 "params": { 00:06:02.393 
"timeout_sec": 30 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "bdev_nvme_set_options", 00:06:02.393 "params": { 00:06:02.393 "action_on_timeout": "none", 00:06:02.393 "timeout_us": 0, 00:06:02.393 "timeout_admin_us": 0, 00:06:02.393 "keep_alive_timeout_ms": 10000, 00:06:02.393 "arbitration_burst": 0, 00:06:02.393 "low_priority_weight": 0, 00:06:02.393 "medium_priority_weight": 0, 00:06:02.393 "high_priority_weight": 0, 00:06:02.393 "nvme_adminq_poll_period_us": 10000, 00:06:02.393 "nvme_ioq_poll_period_us": 0, 00:06:02.393 "io_queue_requests": 0, 00:06:02.393 "delay_cmd_submit": true, 00:06:02.393 "transport_retry_count": 4, 00:06:02.393 "bdev_retry_count": 3, 00:06:02.393 "transport_ack_timeout": 0, 00:06:02.393 "ctrlr_loss_timeout_sec": 0, 00:06:02.393 "reconnect_delay_sec": 0, 00:06:02.393 "fast_io_fail_timeout_sec": 0, 00:06:02.393 "disable_auto_failback": false, 00:06:02.393 "generate_uuids": false, 00:06:02.393 "transport_tos": 0, 00:06:02.393 "nvme_error_stat": false, 00:06:02.393 "rdma_srq_size": 0, 00:06:02.393 "io_path_stat": false, 00:06:02.393 "allow_accel_sequence": false, 00:06:02.393 "rdma_max_cq_size": 0, 00:06:02.393 "rdma_cm_event_timeout_ms": 0, 00:06:02.393 "dhchap_digests": [ 00:06:02.393 "sha256", 00:06:02.393 "sha384", 00:06:02.393 "sha512" 00:06:02.393 ], 00:06:02.393 "dhchap_dhgroups": [ 00:06:02.393 "null", 00:06:02.393 "ffdhe2048", 00:06:02.393 "ffdhe3072", 00:06:02.393 "ffdhe4096", 00:06:02.393 "ffdhe6144", 00:06:02.393 "ffdhe8192" 00:06:02.393 ] 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "bdev_nvme_set_hotplug", 00:06:02.393 "params": { 00:06:02.393 "period_us": 100000, 00:06:02.393 "enable": false 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "bdev_wait_for_examine" 00:06:02.393 } 00:06:02.393 ] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "scsi", 00:06:02.393 "config": null 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "scheduler", 
00:06:02.393 "config": [ 00:06:02.393 { 00:06:02.393 "method": "framework_set_scheduler", 00:06:02.393 "params": { 00:06:02.393 "name": "static" 00:06:02.393 } 00:06:02.393 } 00:06:02.393 ] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "vhost_scsi", 00:06:02.393 "config": [] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "vhost_blk", 00:06:02.393 "config": [] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "ublk", 00:06:02.393 "config": [] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "nbd", 00:06:02.393 "config": [] 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "subsystem": "nvmf", 00:06:02.393 "config": [ 00:06:02.393 { 00:06:02.393 "method": "nvmf_set_config", 00:06:02.393 "params": { 00:06:02.393 "discovery_filter": "match_any", 00:06:02.393 "admin_cmd_passthru": { 00:06:02.393 "identify_ctrlr": false 00:06:02.393 }, 00:06:02.393 "dhchap_digests": [ 00:06:02.393 "sha256", 00:06:02.393 "sha384", 00:06:02.393 "sha512" 00:06:02.393 ], 00:06:02.393 "dhchap_dhgroups": [ 00:06:02.393 "null", 00:06:02.393 "ffdhe2048", 00:06:02.393 "ffdhe3072", 00:06:02.393 "ffdhe4096", 00:06:02.393 "ffdhe6144", 00:06:02.393 "ffdhe8192" 00:06:02.393 ] 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "nvmf_set_max_subsystems", 00:06:02.393 "params": { 00:06:02.393 "max_subsystems": 1024 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "nvmf_set_crdt", 00:06:02.393 "params": { 00:06:02.393 "crdt1": 0, 00:06:02.393 "crdt2": 0, 00:06:02.393 "crdt3": 0 00:06:02.393 } 00:06:02.393 }, 00:06:02.393 { 00:06:02.393 "method": "nvmf_create_transport", 00:06:02.393 "params": { 00:06:02.393 "trtype": "TCP", 00:06:02.393 "max_queue_depth": 128, 00:06:02.393 "max_io_qpairs_per_ctrlr": 127, 00:06:02.393 "in_capsule_data_size": 4096, 00:06:02.393 "max_io_size": 131072, 00:06:02.393 "io_unit_size": 131072, 00:06:02.393 "max_aq_depth": 128, 00:06:02.393 "num_shared_buffers": 511, 00:06:02.394 "buf_cache_size": 4294967295, 
00:06:02.394 "dif_insert_or_strip": false, 00:06:02.394 "zcopy": false, 00:06:02.394 "c2h_success": true, 00:06:02.394 "sock_priority": 0, 00:06:02.394 "abort_timeout_sec": 1, 00:06:02.394 "ack_timeout": 0, 00:06:02.394 "data_wr_pool_size": 0 00:06:02.394 } 00:06:02.394 } 00:06:02.394 ] 00:06:02.394 }, 00:06:02.394 { 00:06:02.394 "subsystem": "iscsi", 00:06:02.394 "config": [ 00:06:02.394 { 00:06:02.394 "method": "iscsi_set_options", 00:06:02.394 "params": { 00:06:02.394 "node_base": "iqn.2016-06.io.spdk", 00:06:02.394 "max_sessions": 128, 00:06:02.394 "max_connections_per_session": 2, 00:06:02.394 "max_queue_depth": 64, 00:06:02.394 "default_time2wait": 2, 00:06:02.394 "default_time2retain": 20, 00:06:02.394 "first_burst_length": 8192, 00:06:02.394 "immediate_data": true, 00:06:02.394 "allow_duplicated_isid": false, 00:06:02.394 "error_recovery_level": 0, 00:06:02.394 "nop_timeout": 60, 00:06:02.394 "nop_in_interval": 30, 00:06:02.394 "disable_chap": false, 00:06:02.394 "require_chap": false, 00:06:02.394 "mutual_chap": false, 00:06:02.394 "chap_group": 0, 00:06:02.394 "max_large_datain_per_connection": 64, 00:06:02.394 "max_r2t_per_connection": 4, 00:06:02.394 "pdu_pool_size": 36864, 00:06:02.394 "immediate_data_pool_size": 16384, 00:06:02.394 "data_out_pool_size": 2048 00:06:02.394 } 00:06:02.394 } 00:06:02.394 ] 00:06:02.394 } 00:06:02.394 ] 00:06:02.394 } 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69551 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69551 ']' 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69551 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69551 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.394 killing process with pid 69551 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69551' 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69551 00:06:02.394 12:26:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69551 00:06:02.985 12:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69579 00:06:02.985 12:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.985 12:26:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:08.266 12:26:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69579 00:06:08.266 12:26:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69579 ']' 00:06:08.266 12:26:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69579 00:06:08.266 12:26:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:08.266 12:26:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.266 12:26:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69579 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.266 killing process with pid 69579 
00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69579' 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69579 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69579 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.266 00:06:08.266 real 0m7.004s 00:06:08.266 user 0m6.529s 00:06:08.266 sys 0m0.762s 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.266 ************************************ 00:06:08.266 END TEST skip_rpc_with_json 00:06:08.266 ************************************ 00:06:08.266 12:26:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:08.266 12:26:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.266 12:26:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.266 12:26:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.266 ************************************ 00:06:08.266 START TEST skip_rpc_with_delay 00:06:08.266 ************************************ 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:08.266 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.527 [2024-11-19 12:26:13.592466] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:08.527 [2024-11-19 12:26:13.592613] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:08.527 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:08.527 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.527 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.527 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.527 00:06:08.527 real 0m0.179s 00:06:08.527 user 0m0.086s 00:06:08.527 sys 0m0.091s 00:06:08.527 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.527 12:26:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:08.527 ************************************ 00:06:08.527 END TEST skip_rpc_with_delay 00:06:08.527 ************************************ 00:06:08.527 12:26:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:08.527 12:26:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:08.527 12:26:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:08.527 12:26:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.527 12:26:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.527 12:26:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.527 ************************************ 00:06:08.527 START TEST exit_on_failed_rpc_init 00:06:08.527 ************************************ 00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69691 00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69691 00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69691 ']' 00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.527 12:26:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.787 [2024-11-19 12:26:13.829762] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:08.787 [2024-11-19 12:26:13.829917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69691 ] 00:06:08.787 [2024-11-19 12:26:13.989923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.787 [2024-11-19 12:26:14.039167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.727 12:26:14 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:09.727 12:26:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.727 [2024-11-19 12:26:14.726548] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:09.727 [2024-11-19 12:26:14.726674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69709 ] 00:06:09.727 [2024-11-19 12:26:14.885512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.727 [2024-11-19 12:26:14.933815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.727 [2024-11-19 12:26:14.933924] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:09.727 [2024-11-19 12:26:14.933942] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:09.727 [2024-11-19 12:26:14.933954] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69691 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69691 ']' 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69691 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69691 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69691' 
00:06:09.987 killing process with pid 69691 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69691 00:06:09.987 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69691 00:06:10.556 00:06:10.556 real 0m1.785s 00:06:10.556 user 0m1.893s 00:06:10.556 sys 0m0.538s 00:06:10.556 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.556 12:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.556 ************************************ 00:06:10.556 END TEST exit_on_failed_rpc_init 00:06:10.556 ************************************ 00:06:10.556 12:26:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.556 00:06:10.556 real 0m14.934s 00:06:10.556 user 0m13.757s 00:06:10.556 sys 0m2.054s 00:06:10.556 12:26:15 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.556 12:26:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.556 ************************************ 00:06:10.556 END TEST skip_rpc 00:06:10.556 ************************************ 00:06:10.556 12:26:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.556 12:26:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.556 12:26:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.556 12:26:15 -- common/autotest_common.sh@10 -- # set +x 00:06:10.556 ************************************ 00:06:10.556 START TEST rpc_client 00:06:10.556 ************************************ 00:06:10.556 12:26:15 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.556 * Looking for test storage... 
00:06:10.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:10.556 12:26:15 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.556 12:26:15 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.556 12:26:15 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.816 12:26:15 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.816 12:26:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:10.816 12:26:15 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.816 12:26:15 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.816 --rc genhtml_branch_coverage=1 00:06:10.816 --rc genhtml_function_coverage=1 00:06:10.816 --rc genhtml_legend=1 00:06:10.816 --rc geninfo_all_blocks=1 00:06:10.816 --rc geninfo_unexecuted_blocks=1 00:06:10.816 00:06:10.816 ' 00:06:10.816 12:26:15 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.816 --rc genhtml_branch_coverage=1 00:06:10.816 --rc genhtml_function_coverage=1 00:06:10.816 --rc genhtml_legend=1 00:06:10.816 --rc geninfo_all_blocks=1 00:06:10.816 --rc geninfo_unexecuted_blocks=1 00:06:10.816 00:06:10.816 ' 00:06:10.816 12:26:15 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.816 --rc genhtml_branch_coverage=1 00:06:10.816 --rc genhtml_function_coverage=1 00:06:10.816 --rc genhtml_legend=1 00:06:10.816 --rc geninfo_all_blocks=1 00:06:10.816 --rc geninfo_unexecuted_blocks=1 00:06:10.816 00:06:10.816 ' 00:06:10.816 12:26:15 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.816 --rc genhtml_branch_coverage=1 00:06:10.816 --rc genhtml_function_coverage=1 00:06:10.816 --rc genhtml_legend=1 00:06:10.816 --rc geninfo_all_blocks=1 00:06:10.816 --rc geninfo_unexecuted_blocks=1 00:06:10.816 00:06:10.816 ' 00:06:10.816 12:26:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:10.816 OK 00:06:10.817 12:26:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:10.817 00:06:10.817 real 0m0.302s 00:06:10.817 user 0m0.159s 00:06:10.817 sys 0m0.158s 00:06:10.817 12:26:15 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.817 ************************************ 00:06:10.817 END TEST rpc_client 00:06:10.817 ************************************ 00:06:10.817 12:26:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:10.817 12:26:16 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.817 12:26:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.817 12:26:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.817 12:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:10.817 ************************************ 00:06:10.817 START TEST json_config 00:06:10.817 ************************************ 00:06:10.817 12:26:16 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:11.077 12:26:16 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.077 12:26:16 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.077 12:26:16 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.077 12:26:16 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.077 12:26:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.077 12:26:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.077 12:26:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.077 12:26:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.077 12:26:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.077 12:26:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.077 12:26:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.077 12:26:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.077 12:26:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.077 12:26:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.077 12:26:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.077 12:26:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:11.077 12:26:16 json_config -- scripts/common.sh@345 -- # : 1 00:06:11.078 12:26:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.078 12:26:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.078 12:26:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:11.078 12:26:16 json_config -- scripts/common.sh@353 -- # local d=1 00:06:11.078 12:26:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.078 12:26:16 json_config -- scripts/common.sh@355 -- # echo 1 00:06:11.078 12:26:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.078 12:26:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:11.078 12:26:16 json_config -- scripts/common.sh@353 -- # local d=2 00:06:11.078 12:26:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.078 12:26:16 json_config -- scripts/common.sh@355 -- # echo 2 00:06:11.078 12:26:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.078 12:26:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.078 12:26:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.078 12:26:16 json_config -- scripts/common.sh@368 -- # return 0 00:06:11.078 12:26:16 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.078 12:26:16 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.078 --rc genhtml_branch_coverage=1 00:06:11.078 --rc genhtml_function_coverage=1 00:06:11.078 --rc genhtml_legend=1 00:06:11.078 --rc geninfo_all_blocks=1 00:06:11.078 --rc geninfo_unexecuted_blocks=1 00:06:11.078 00:06:11.078 ' 00:06:11.078 12:26:16 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.078 --rc genhtml_branch_coverage=1 00:06:11.078 --rc genhtml_function_coverage=1 00:06:11.078 --rc genhtml_legend=1 00:06:11.078 --rc geninfo_all_blocks=1 00:06:11.078 --rc geninfo_unexecuted_blocks=1 00:06:11.078 00:06:11.078 ' 00:06:11.078 12:26:16 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.078 --rc genhtml_branch_coverage=1 00:06:11.078 --rc genhtml_function_coverage=1 00:06:11.078 --rc genhtml_legend=1 00:06:11.078 --rc geninfo_all_blocks=1 00:06:11.078 --rc geninfo_unexecuted_blocks=1 00:06:11.078 00:06:11.078 ' 00:06:11.078 12:26:16 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.078 --rc genhtml_branch_coverage=1 00:06:11.078 --rc genhtml_function_coverage=1 00:06:11.078 --rc genhtml_legend=1 00:06:11.078 --rc geninfo_all_blocks=1 00:06:11.078 --rc geninfo_unexecuted_blocks=1 00:06:11.078 00:06:11.078 ' 00:06:11.078 12:26:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f01462e2-3748-4a1e-90b0-ad8a7610ee7d 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=f01462e2-3748-4a1e-90b0-ad8a7610ee7d 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.078 12:26:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.078 12:26:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.078 12:26:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.078 12:26:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.078 12:26:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.078 12:26:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.078 12:26:16 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.078 12:26:16 json_config -- paths/export.sh@5 -- # export PATH 00:06:11.078 12:26:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@51 -- # : 0 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.078 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.078 12:26:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.078 12:26:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:11.078 12:26:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:11.078 12:26:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:11.078 12:26:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:11.078 12:26:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:11.078 12:26:16 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:11.078 WARNING: No tests are enabled so not running JSON configuration tests 00:06:11.078 12:26:16 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:11.078 00:06:11.078 real 0m0.232s 00:06:11.078 user 0m0.136s 00:06:11.078 sys 0m0.100s 00:06:11.078 ************************************ 00:06:11.078 END TEST json_config 00:06:11.078 ************************************ 00:06:11.078 12:26:16 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.078 12:26:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.078 12:26:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:11.078 12:26:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.078 12:26:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.078 12:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:11.078 ************************************ 00:06:11.078 START TEST json_config_extra_key 00:06:11.078 ************************************ 00:06:11.078 12:26:16 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:11.339 12:26:16 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.339 12:26:16 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:06:11.339 12:26:16 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.339 12:26:16 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:11.339 12:26:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.340 12:26:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:11.340 12:26:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.340 12:26:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.340 12:26:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.340 12:26:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.340 --rc genhtml_branch_coverage=1 00:06:11.340 --rc genhtml_function_coverage=1 00:06:11.340 --rc genhtml_legend=1 00:06:11.340 --rc geninfo_all_blocks=1 00:06:11.340 --rc geninfo_unexecuted_blocks=1 00:06:11.340 00:06:11.340 ' 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.340 --rc genhtml_branch_coverage=1 00:06:11.340 --rc genhtml_function_coverage=1 00:06:11.340 --rc 
genhtml_legend=1 00:06:11.340 --rc geninfo_all_blocks=1 00:06:11.340 --rc geninfo_unexecuted_blocks=1 00:06:11.340 00:06:11.340 ' 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.340 --rc genhtml_branch_coverage=1 00:06:11.340 --rc genhtml_function_coverage=1 00:06:11.340 --rc genhtml_legend=1 00:06:11.340 --rc geninfo_all_blocks=1 00:06:11.340 --rc geninfo_unexecuted_blocks=1 00:06:11.340 00:06:11.340 ' 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.340 --rc genhtml_branch_coverage=1 00:06:11.340 --rc genhtml_function_coverage=1 00:06:11.340 --rc genhtml_legend=1 00:06:11.340 --rc geninfo_all_blocks=1 00:06:11.340 --rc geninfo_unexecuted_blocks=1 00:06:11.340 00:06:11.340 ' 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f01462e2-3748-4a1e-90b0-ad8a7610ee7d 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f01462e2-3748-4a1e-90b0-ad8a7610ee7d 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.340 12:26:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.340 12:26:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.340 12:26:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.340 12:26:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.340 12:26:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.340 12:26:16 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.340 12:26:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.340 12:26:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:11.340 12:26:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.340 12:26:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:11.340 INFO: launching applications... 
00:06:11.340 12:26:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69897 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.340 Waiting for target to run... 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69897 /var/tmp/spdk_tgt.sock 00:06:11.340 12:26:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69897 ']' 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:11.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.340 12:26:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.600 [2024-11-19 12:26:16.630787] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:11.601 [2024-11-19 12:26:16.631310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69897 ] 00:06:11.860 [2024-11-19 12:26:17.001464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.860 [2024-11-19 12:26:17.033499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.430 00:06:12.430 INFO: shutting down applications... 00:06:12.430 12:26:17 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.430 12:26:17 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:12.430 12:26:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:12.430 12:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:12.430 12:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:12.430 12:26:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:12.430 12:26:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.430 12:26:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69897 ]] 00:06:12.430 12:26:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69897 00:06:12.430 12:26:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.430 12:26:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.430 12:26:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69897 00:06:12.430 12:26:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.001 12:26:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.001 12:26:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.001 12:26:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69897 00:06:13.001 12:26:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.001 12:26:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:13.001 12:26:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.001 12:26:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.001 SPDK target shutdown done 00:06:13.001 12:26:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:13.001 Success 00:06:13.001 00:06:13.001 real 0m1.645s 00:06:13.001 user 0m1.349s 00:06:13.001 sys 0m0.489s 00:06:13.001 12:26:17 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.001 12:26:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.001 ************************************ 
00:06:13.001 END TEST json_config_extra_key 00:06:13.001 ************************************ 00:06:13.001 12:26:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.001 12:26:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.001 12:26:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.001 12:26:18 -- common/autotest_common.sh@10 -- # set +x 00:06:13.001 ************************************ 00:06:13.001 START TEST alias_rpc 00:06:13.001 ************************************ 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.001 * Looking for test storage... 00:06:13.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.001 12:26:18 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.001 12:26:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.001 --rc genhtml_branch_coverage=1 00:06:13.001 --rc genhtml_function_coverage=1 00:06:13.001 --rc genhtml_legend=1 00:06:13.001 --rc geninfo_all_blocks=1 00:06:13.001 --rc geninfo_unexecuted_blocks=1 00:06:13.001 00:06:13.001 ' 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:13.001 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.001 --rc genhtml_branch_coverage=1 00:06:13.001 --rc genhtml_function_coverage=1 00:06:13.001 --rc genhtml_legend=1 00:06:13.001 --rc geninfo_all_blocks=1 00:06:13.001 --rc geninfo_unexecuted_blocks=1 00:06:13.001 00:06:13.001 ' 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.001 --rc genhtml_branch_coverage=1 00:06:13.001 --rc genhtml_function_coverage=1 00:06:13.001 --rc genhtml_legend=1 00:06:13.001 --rc geninfo_all_blocks=1 00:06:13.001 --rc geninfo_unexecuted_blocks=1 00:06:13.001 00:06:13.001 ' 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:13.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.001 --rc genhtml_branch_coverage=1 00:06:13.001 --rc genhtml_function_coverage=1 00:06:13.001 --rc genhtml_legend=1 00:06:13.001 --rc geninfo_all_blocks=1 00:06:13.001 --rc geninfo_unexecuted_blocks=1 00:06:13.001 00:06:13.001 ' 00:06:13.001 12:26:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.001 12:26:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.001 12:26:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69970 00:06:13.001 12:26:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69970 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69970 ']' 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:13.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.001 12:26:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.261 [2024-11-19 12:26:18.339620] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:13.261 [2024-11-19 12:26:18.339856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69970 ] 00:06:13.261 [2024-11-19 12:26:18.507033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.520 [2024-11-19 12:26:18.553452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.088 12:26:19 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.088 12:26:19 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.088 12:26:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:14.348 12:26:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69970 00:06:14.348 12:26:19 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69970 ']' 00:06:14.348 12:26:19 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69970 00:06:14.348 12:26:19 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:14.348 12:26:19 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.348 12:26:19 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69970 00:06:14.348 12:26:19 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.348 killing process with pid 69970 00:06:14.348 12:26:19 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.348 12:26:19 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 69970' 00:06:14.348 12:26:19 alias_rpc -- common/autotest_common.sh@969 -- # kill 69970 00:06:14.348 12:26:19 alias_rpc -- common/autotest_common.sh@974 -- # wait 69970 00:06:14.607 ************************************ 00:06:14.607 END TEST alias_rpc 00:06:14.607 ************************************ 00:06:14.607 00:06:14.607 real 0m1.783s 00:06:14.607 user 0m1.767s 00:06:14.607 sys 0m0.530s 00:06:14.607 12:26:19 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.607 12:26:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.607 12:26:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:14.607 12:26:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.607 12:26:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.607 12:26:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.607 12:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:14.866 ************************************ 00:06:14.866 START TEST spdkcli_tcp 00:06:14.866 ************************************ 00:06:14.866 12:26:19 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:14.866 * Looking for test storage... 
00:06:14.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:14.866 12:26:19 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.866 12:26:19 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.866 12:26:19 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.866 12:26:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.866 --rc genhtml_branch_coverage=1 00:06:14.866 --rc genhtml_function_coverage=1 00:06:14.866 --rc genhtml_legend=1 00:06:14.866 --rc geninfo_all_blocks=1 00:06:14.866 --rc geninfo_unexecuted_blocks=1 00:06:14.866 00:06:14.866 ' 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.866 --rc genhtml_branch_coverage=1 00:06:14.866 --rc genhtml_function_coverage=1 00:06:14.866 --rc genhtml_legend=1 00:06:14.866 --rc geninfo_all_blocks=1 00:06:14.866 --rc geninfo_unexecuted_blocks=1 00:06:14.866 00:06:14.866 ' 00:06:14.866 12:26:20 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.866 --rc genhtml_branch_coverage=1 00:06:14.866 --rc genhtml_function_coverage=1 00:06:14.866 --rc genhtml_legend=1 00:06:14.866 --rc geninfo_all_blocks=1 00:06:14.866 --rc geninfo_unexecuted_blocks=1 00:06:14.866 00:06:14.866 ' 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.866 --rc genhtml_branch_coverage=1 00:06:14.866 --rc genhtml_function_coverage=1 00:06:14.866 --rc genhtml_legend=1 00:06:14.866 --rc geninfo_all_blocks=1 00:06:14.866 --rc geninfo_unexecuted_blocks=1 00:06:14.866 00:06:14.866 ' 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70050 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.866 12:26:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70050 00:06:14.866 12:26:20 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 70050 ']' 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.866 12:26:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.125 [2024-11-19 12:26:20.204830] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:15.125 [2024-11-19 12:26:20.205035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70050 ] 00:06:15.125 [2024-11-19 12:26:20.371315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.387 [2024-11-19 12:26:20.422284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.387 [2024-11-19 12:26:20.422422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.958 12:26:21 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.958 12:26:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:15.958 12:26:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.958 12:26:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70067 00:06:15.958 12:26:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.958 [ 00:06:15.958 "bdev_malloc_delete", 
00:06:15.958 "bdev_malloc_create", 00:06:15.958 "bdev_null_resize", 00:06:15.958 "bdev_null_delete", 00:06:15.958 "bdev_null_create", 00:06:15.958 "bdev_nvme_cuse_unregister", 00:06:15.958 "bdev_nvme_cuse_register", 00:06:15.958 "bdev_opal_new_user", 00:06:15.958 "bdev_opal_set_lock_state", 00:06:15.958 "bdev_opal_delete", 00:06:15.958 "bdev_opal_get_info", 00:06:15.958 "bdev_opal_create", 00:06:15.958 "bdev_nvme_opal_revert", 00:06:15.958 "bdev_nvme_opal_init", 00:06:15.958 "bdev_nvme_send_cmd", 00:06:15.958 "bdev_nvme_set_keys", 00:06:15.958 "bdev_nvme_get_path_iostat", 00:06:15.958 "bdev_nvme_get_mdns_discovery_info", 00:06:15.958 "bdev_nvme_stop_mdns_discovery", 00:06:15.958 "bdev_nvme_start_mdns_discovery", 00:06:15.958 "bdev_nvme_set_multipath_policy", 00:06:15.958 "bdev_nvme_set_preferred_path", 00:06:15.958 "bdev_nvme_get_io_paths", 00:06:15.958 "bdev_nvme_remove_error_injection", 00:06:15.958 "bdev_nvme_add_error_injection", 00:06:15.958 "bdev_nvme_get_discovery_info", 00:06:15.958 "bdev_nvme_stop_discovery", 00:06:15.958 "bdev_nvme_start_discovery", 00:06:15.958 "bdev_nvme_get_controller_health_info", 00:06:15.958 "bdev_nvme_disable_controller", 00:06:15.958 "bdev_nvme_enable_controller", 00:06:15.958 "bdev_nvme_reset_controller", 00:06:15.958 "bdev_nvme_get_transport_statistics", 00:06:15.958 "bdev_nvme_apply_firmware", 00:06:15.958 "bdev_nvme_detach_controller", 00:06:15.958 "bdev_nvme_get_controllers", 00:06:15.958 "bdev_nvme_attach_controller", 00:06:15.958 "bdev_nvme_set_hotplug", 00:06:15.958 "bdev_nvme_set_options", 00:06:15.958 "bdev_passthru_delete", 00:06:15.958 "bdev_passthru_create", 00:06:15.958 "bdev_lvol_set_parent_bdev", 00:06:15.958 "bdev_lvol_set_parent", 00:06:15.958 "bdev_lvol_check_shallow_copy", 00:06:15.958 "bdev_lvol_start_shallow_copy", 00:06:15.958 "bdev_lvol_grow_lvstore", 00:06:15.958 "bdev_lvol_get_lvols", 00:06:15.958 "bdev_lvol_get_lvstores", 00:06:15.958 "bdev_lvol_delete", 00:06:15.958 "bdev_lvol_set_read_only", 
00:06:15.958 "bdev_lvol_resize", 00:06:15.958 "bdev_lvol_decouple_parent", 00:06:15.958 "bdev_lvol_inflate", 00:06:15.958 "bdev_lvol_rename", 00:06:15.958 "bdev_lvol_clone_bdev", 00:06:15.958 "bdev_lvol_clone", 00:06:15.959 "bdev_lvol_snapshot", 00:06:15.959 "bdev_lvol_create", 00:06:15.959 "bdev_lvol_delete_lvstore", 00:06:15.959 "bdev_lvol_rename_lvstore", 00:06:15.959 "bdev_lvol_create_lvstore", 00:06:15.959 "bdev_raid_set_options", 00:06:15.959 "bdev_raid_remove_base_bdev", 00:06:15.959 "bdev_raid_add_base_bdev", 00:06:15.959 "bdev_raid_delete", 00:06:15.959 "bdev_raid_create", 00:06:15.959 "bdev_raid_get_bdevs", 00:06:15.959 "bdev_error_inject_error", 00:06:15.959 "bdev_error_delete", 00:06:15.959 "bdev_error_create", 00:06:15.959 "bdev_split_delete", 00:06:15.959 "bdev_split_create", 00:06:15.959 "bdev_delay_delete", 00:06:15.959 "bdev_delay_create", 00:06:15.959 "bdev_delay_update_latency", 00:06:15.959 "bdev_zone_block_delete", 00:06:15.959 "bdev_zone_block_create", 00:06:15.959 "blobfs_create", 00:06:15.959 "blobfs_detect", 00:06:15.959 "blobfs_set_cache_size", 00:06:15.959 "bdev_aio_delete", 00:06:15.959 "bdev_aio_rescan", 00:06:15.959 "bdev_aio_create", 00:06:15.959 "bdev_ftl_set_property", 00:06:15.959 "bdev_ftl_get_properties", 00:06:15.959 "bdev_ftl_get_stats", 00:06:15.959 "bdev_ftl_unmap", 00:06:15.959 "bdev_ftl_unload", 00:06:15.959 "bdev_ftl_delete", 00:06:15.959 "bdev_ftl_load", 00:06:15.959 "bdev_ftl_create", 00:06:15.959 "bdev_virtio_attach_controller", 00:06:15.959 "bdev_virtio_scsi_get_devices", 00:06:15.959 "bdev_virtio_detach_controller", 00:06:15.959 "bdev_virtio_blk_set_hotplug", 00:06:15.959 "bdev_iscsi_delete", 00:06:15.959 "bdev_iscsi_create", 00:06:15.959 "bdev_iscsi_set_options", 00:06:15.959 "accel_error_inject_error", 00:06:15.959 "ioat_scan_accel_module", 00:06:15.959 "dsa_scan_accel_module", 00:06:15.959 "iaa_scan_accel_module", 00:06:15.959 "keyring_file_remove_key", 00:06:15.959 "keyring_file_add_key", 00:06:15.959 
"keyring_linux_set_options", 00:06:15.959 "fsdev_aio_delete", 00:06:15.959 "fsdev_aio_create", 00:06:15.959 "iscsi_get_histogram", 00:06:15.959 "iscsi_enable_histogram", 00:06:15.959 "iscsi_set_options", 00:06:15.959 "iscsi_get_auth_groups", 00:06:15.959 "iscsi_auth_group_remove_secret", 00:06:15.959 "iscsi_auth_group_add_secret", 00:06:15.959 "iscsi_delete_auth_group", 00:06:15.959 "iscsi_create_auth_group", 00:06:15.959 "iscsi_set_discovery_auth", 00:06:15.959 "iscsi_get_options", 00:06:15.959 "iscsi_target_node_request_logout", 00:06:15.959 "iscsi_target_node_set_redirect", 00:06:15.959 "iscsi_target_node_set_auth", 00:06:15.959 "iscsi_target_node_add_lun", 00:06:15.959 "iscsi_get_stats", 00:06:15.959 "iscsi_get_connections", 00:06:15.959 "iscsi_portal_group_set_auth", 00:06:15.959 "iscsi_start_portal_group", 00:06:15.959 "iscsi_delete_portal_group", 00:06:15.959 "iscsi_create_portal_group", 00:06:15.959 "iscsi_get_portal_groups", 00:06:15.959 "iscsi_delete_target_node", 00:06:15.959 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.959 "iscsi_target_node_add_pg_ig_maps", 00:06:15.959 "iscsi_create_target_node", 00:06:15.959 "iscsi_get_target_nodes", 00:06:15.959 "iscsi_delete_initiator_group", 00:06:15.959 "iscsi_initiator_group_remove_initiators", 00:06:15.959 "iscsi_initiator_group_add_initiators", 00:06:15.959 "iscsi_create_initiator_group", 00:06:15.959 "iscsi_get_initiator_groups", 00:06:15.959 "nvmf_set_crdt", 00:06:15.959 "nvmf_set_config", 00:06:15.959 "nvmf_set_max_subsystems", 00:06:15.959 "nvmf_stop_mdns_prr", 00:06:15.959 "nvmf_publish_mdns_prr", 00:06:15.959 "nvmf_subsystem_get_listeners", 00:06:15.959 "nvmf_subsystem_get_qpairs", 00:06:15.959 "nvmf_subsystem_get_controllers", 00:06:15.959 "nvmf_get_stats", 00:06:15.959 "nvmf_get_transports", 00:06:15.959 "nvmf_create_transport", 00:06:15.959 "nvmf_get_targets", 00:06:15.959 "nvmf_delete_target", 00:06:15.959 "nvmf_create_target", 00:06:15.959 "nvmf_subsystem_allow_any_host", 00:06:15.959 
"nvmf_subsystem_set_keys", 00:06:15.959 "nvmf_subsystem_remove_host", 00:06:15.959 "nvmf_subsystem_add_host", 00:06:15.959 "nvmf_ns_remove_host", 00:06:15.959 "nvmf_ns_add_host", 00:06:15.959 "nvmf_subsystem_remove_ns", 00:06:15.959 "nvmf_subsystem_set_ns_ana_group", 00:06:15.959 "nvmf_subsystem_add_ns", 00:06:15.959 "nvmf_subsystem_listener_set_ana_state", 00:06:15.959 "nvmf_discovery_get_referrals", 00:06:15.959 "nvmf_discovery_remove_referral", 00:06:15.959 "nvmf_discovery_add_referral", 00:06:15.959 "nvmf_subsystem_remove_listener", 00:06:15.959 "nvmf_subsystem_add_listener", 00:06:15.959 "nvmf_delete_subsystem", 00:06:15.959 "nvmf_create_subsystem", 00:06:15.959 "nvmf_get_subsystems", 00:06:15.959 "env_dpdk_get_mem_stats", 00:06:15.959 "nbd_get_disks", 00:06:15.959 "nbd_stop_disk", 00:06:15.959 "nbd_start_disk", 00:06:15.959 "ublk_recover_disk", 00:06:15.959 "ublk_get_disks", 00:06:15.959 "ublk_stop_disk", 00:06:15.959 "ublk_start_disk", 00:06:15.959 "ublk_destroy_target", 00:06:15.959 "ublk_create_target", 00:06:15.959 "virtio_blk_create_transport", 00:06:15.959 "virtio_blk_get_transports", 00:06:15.959 "vhost_controller_set_coalescing", 00:06:15.959 "vhost_get_controllers", 00:06:15.959 "vhost_delete_controller", 00:06:15.959 "vhost_create_blk_controller", 00:06:15.959 "vhost_scsi_controller_remove_target", 00:06:15.959 "vhost_scsi_controller_add_target", 00:06:15.959 "vhost_start_scsi_controller", 00:06:15.959 "vhost_create_scsi_controller", 00:06:15.959 "thread_set_cpumask", 00:06:15.959 "scheduler_set_options", 00:06:15.959 "framework_get_governor", 00:06:15.959 "framework_get_scheduler", 00:06:15.959 "framework_set_scheduler", 00:06:15.959 "framework_get_reactors", 00:06:15.959 "thread_get_io_channels", 00:06:15.959 "thread_get_pollers", 00:06:15.959 "thread_get_stats", 00:06:15.959 "framework_monitor_context_switch", 00:06:15.959 "spdk_kill_instance", 00:06:15.959 "log_enable_timestamps", 00:06:15.959 "log_get_flags", 00:06:15.959 "log_clear_flag", 
00:06:15.959 "log_set_flag", 00:06:15.959 "log_get_level", 00:06:15.959 "log_set_level", 00:06:15.959 "log_get_print_level", 00:06:15.959 "log_set_print_level", 00:06:15.959 "framework_enable_cpumask_locks", 00:06:15.959 "framework_disable_cpumask_locks", 00:06:15.959 "framework_wait_init", 00:06:15.959 "framework_start_init", 00:06:15.959 "scsi_get_devices", 00:06:15.959 "bdev_get_histogram", 00:06:15.959 "bdev_enable_histogram", 00:06:15.959 "bdev_set_qos_limit", 00:06:15.959 "bdev_set_qd_sampling_period", 00:06:15.959 "bdev_get_bdevs", 00:06:15.959 "bdev_reset_iostat", 00:06:15.959 "bdev_get_iostat", 00:06:15.959 "bdev_examine", 00:06:15.959 "bdev_wait_for_examine", 00:06:15.959 "bdev_set_options", 00:06:15.959 "accel_get_stats", 00:06:15.959 "accel_set_options", 00:06:15.959 "accel_set_driver", 00:06:15.959 "accel_crypto_key_destroy", 00:06:15.959 "accel_crypto_keys_get", 00:06:15.959 "accel_crypto_key_create", 00:06:15.959 "accel_assign_opc", 00:06:15.959 "accel_get_module_info", 00:06:15.959 "accel_get_opc_assignments", 00:06:15.959 "vmd_rescan", 00:06:15.959 "vmd_remove_device", 00:06:15.959 "vmd_enable", 00:06:15.959 "sock_get_default_impl", 00:06:15.959 "sock_set_default_impl", 00:06:15.959 "sock_impl_set_options", 00:06:15.959 "sock_impl_get_options", 00:06:15.959 "iobuf_get_stats", 00:06:15.959 "iobuf_set_options", 00:06:15.959 "keyring_get_keys", 00:06:15.959 "framework_get_pci_devices", 00:06:15.959 "framework_get_config", 00:06:15.959 "framework_get_subsystems", 00:06:15.959 "fsdev_set_opts", 00:06:15.959 "fsdev_get_opts", 00:06:15.959 "trace_get_info", 00:06:15.959 "trace_get_tpoint_group_mask", 00:06:15.959 "trace_disable_tpoint_group", 00:06:15.959 "trace_enable_tpoint_group", 00:06:15.959 "trace_clear_tpoint_mask", 00:06:15.959 "trace_set_tpoint_mask", 00:06:15.959 "notify_get_notifications", 00:06:15.959 "notify_get_types", 00:06:15.959 "spdk_get_version", 00:06:15.959 "rpc_get_methods" 00:06:15.959 ] 00:06:16.218 12:26:21 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.218 12:26:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:16.218 12:26:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70050 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70050 ']' 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70050 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70050 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70050' 00:06:16.218 killing process with pid 70050 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70050 00:06:16.218 12:26:21 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70050 00:06:16.786 00:06:16.786 real 0m2.123s 00:06:16.786 user 0m3.530s 00:06:16.786 sys 0m0.564s 00:06:16.786 12:26:21 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.786 12:26:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.786 ************************************ 00:06:16.786 END TEST spdkcli_tcp 00:06:16.786 ************************************ 00:06:16.786 12:26:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.046 12:26:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.046 12:26:22 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.046 12:26:22 -- common/autotest_common.sh@10 -- # set +x 00:06:17.046 ************************************ 00:06:17.046 START TEST dpdk_mem_utility 00:06:17.046 ************************************ 00:06:17.046 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.046 * Looking for test storage... 00:06:17.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:17.046 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:17.046 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:17.046 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:17.046 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.046 12:26:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:17.047 
12:26:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.047 12:26:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:17.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.047 --rc genhtml_branch_coverage=1 00:06:17.047 --rc genhtml_function_coverage=1 00:06:17.047 --rc genhtml_legend=1 00:06:17.047 --rc geninfo_all_blocks=1 00:06:17.047 --rc geninfo_unexecuted_blocks=1 00:06:17.047 00:06:17.047 ' 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:17.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.047 --rc 
genhtml_branch_coverage=1 00:06:17.047 --rc genhtml_function_coverage=1 00:06:17.047 --rc genhtml_legend=1 00:06:17.047 --rc geninfo_all_blocks=1 00:06:17.047 --rc geninfo_unexecuted_blocks=1 00:06:17.047 00:06:17.047 ' 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:17.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.047 --rc genhtml_branch_coverage=1 00:06:17.047 --rc genhtml_function_coverage=1 00:06:17.047 --rc genhtml_legend=1 00:06:17.047 --rc geninfo_all_blocks=1 00:06:17.047 --rc geninfo_unexecuted_blocks=1 00:06:17.047 00:06:17.047 ' 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:17.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.047 --rc genhtml_branch_coverage=1 00:06:17.047 --rc genhtml_function_coverage=1 00:06:17.047 --rc genhtml_legend=1 00:06:17.047 --rc geninfo_all_blocks=1 00:06:17.047 --rc geninfo_unexecuted_blocks=1 00:06:17.047 00:06:17.047 ' 00:06:17.047 12:26:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:17.047 12:26:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70150 00:06:17.047 12:26:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.047 12:26:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70150 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70150 ']' 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.047 12:26:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.307 [2024-11-19 12:26:22.381078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:17.307 [2024-11-19 12:26:22.381221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70150 ] 00:06:17.307 [2024-11-19 12:26:22.542549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.566 [2024-11-19 12:26:22.620658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.136 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.136 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:18.136 12:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:18.136 12:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:18.136 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.136 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.136 { 00:06:18.136 "filename": "/tmp/spdk_mem_dump.txt" 00:06:18.136 } 00:06:18.136 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.136 12:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:18.136 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:18.136 1 heaps 
totaling size 860.000000 MiB 00:06:18.136 size: 860.000000 MiB heap id: 0 00:06:18.136 end heaps---------- 00:06:18.136 9 mempools totaling size 642.649841 MiB 00:06:18.136 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:18.136 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:18.136 size: 92.545471 MiB name: bdev_io_70150 00:06:18.136 size: 51.011292 MiB name: evtpool_70150 00:06:18.136 size: 50.003479 MiB name: msgpool_70150 00:06:18.136 size: 36.509338 MiB name: fsdev_io_70150 00:06:18.136 size: 21.763794 MiB name: PDU_Pool 00:06:18.136 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:18.136 size: 0.026123 MiB name: Session_Pool 00:06:18.136 end mempools------- 00:06:18.136 6 memzones totaling size 4.142822 MiB 00:06:18.136 size: 1.000366 MiB name: RG_ring_0_70150 00:06:18.136 size: 1.000366 MiB name: RG_ring_1_70150 00:06:18.136 size: 1.000366 MiB name: RG_ring_4_70150 00:06:18.136 size: 1.000366 MiB name: RG_ring_5_70150 00:06:18.136 size: 0.125366 MiB name: RG_ring_2_70150 00:06:18.136 size: 0.015991 MiB name: RG_ring_3_70150 00:06:18.136 end memzones------- 00:06:18.136 12:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:18.136 heap id: 0 total size: 860.000000 MiB number of busy elements: 304 number of free elements: 16 00:06:18.136 list of free elements. 
size: 13.937073 MiB 00:06:18.136 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:18.136 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:18.136 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:18.136 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:18.136 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:18.136 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:18.136 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:18.136 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:18.136 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:18.136 element at address: 0x20001d800000 with size: 0.567505 MiB 00:06:18.136 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:18.136 element at address: 0x200003e00000 with size: 0.489014 MiB 00:06:18.136 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:18.136 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:18.136 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:18.136 element at address: 0x200003a00000 with size: 0.353210 MiB 00:06:18.136 list of standard malloc elements. 
size: 199.266235 MiB 00:06:18.136 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:18.136 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:18.136 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:18.136 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:18.136 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:18.136 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:18.136 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:18.136 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:18.136 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:18.136 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:18.136 element at 
address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:18.136 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a5eb80 with size: 0.000183 MiB 
00:06:18.136 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7dcc0 with 
size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:18.136 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:18.137 element at address: 
0x20000707b180 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:18.137 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:18.137 
element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892b00 with size: 0.000183 
MiB 00:06:18.137 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894000 
with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:18.137 element at 
address: 0x20002ac65500 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:18.137 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 
00:06:18.138 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6eac0 with 
size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:18.138 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:18.138 list of memzone 
associated elements. size: 646.796692 MiB 00:06:18.138 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:18.138 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:18.138 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:18.138 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:18.138 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:18.138 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70150_0 00:06:18.138 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:18.138 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70150_0 00:06:18.138 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:18.138 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70150_0 00:06:18.138 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:18.138 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70150_0 00:06:18.138 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:18.138 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:18.138 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:18.138 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:18.138 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:18.138 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70150 00:06:18.138 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:18.138 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70150 00:06:18.138 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:18.138 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70150 00:06:18.138 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:18.138 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:18.138 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:18.138 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:18.138 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:18.138 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:18.138 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:18.138 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:18.138 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:18.138 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70150 00:06:18.138 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:18.138 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70150 00:06:18.138 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:18.138 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70150 00:06:18.138 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:18.138 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70150 00:06:18.138 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:18.138 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70150 00:06:18.138 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:18.138 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70150 00:06:18.138 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:18.138 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:18.138 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:18.138 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:18.138 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:18.138 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:18.138 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:06:18.138 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70150 00:06:18.138 element at address: 0x2000096f5b80 with size: 0.031738 MiB 
00:06:18.138 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:18.138 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:18.138 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:18.138 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:06:18.138 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70150 00:06:18.138 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:18.138 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:18.138 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:18.138 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70150 00:06:18.138 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:18.138 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70150 00:06:18.138 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:06:18.138 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70150 00:06:18.138 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:18.138 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:18.138 12:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:18.138 12:26:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70150 00:06:18.138 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70150 ']' 00:06:18.138 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70150 00:06:18.138 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:18.138 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.138 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70150 00:06:18.139 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.139 12:26:23 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.139 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70150' 00:06:18.139 killing process with pid 70150 00:06:18.139 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70150 00:06:18.139 12:26:23 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70150 00:06:19.077 00:06:19.077 real 0m1.971s 00:06:19.077 user 0m1.746s 00:06:19.077 sys 0m0.674s 00:06:19.077 12:26:24 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.077 12:26:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.077 ************************************ 00:06:19.077 END TEST dpdk_mem_utility 00:06:19.077 ************************************ 00:06:19.077 12:26:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:19.077 12:26:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.077 12:26:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.077 12:26:24 -- common/autotest_common.sh@10 -- # set +x 00:06:19.077 ************************************ 00:06:19.077 START TEST event 00:06:19.077 ************************************ 00:06:19.077 12:26:24 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:19.077 * Looking for test storage... 
00:06:19.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:19.077 12:26:24 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.077 12:26:24 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.077 12:26:24 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.077 12:26:24 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.077 12:26:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.077 12:26:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.077 12:26:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.077 12:26:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.077 12:26:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.077 12:26:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.077 12:26:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.077 12:26:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.077 12:26:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.077 12:26:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.077 12:26:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.077 12:26:24 event -- scripts/common.sh@344 -- # case "$op" in 00:06:19.077 12:26:24 event -- scripts/common.sh@345 -- # : 1 00:06:19.077 12:26:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.077 12:26:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.077 12:26:24 event -- scripts/common.sh@365 -- # decimal 1 00:06:19.077 12:26:24 event -- scripts/common.sh@353 -- # local d=1 00:06:19.077 12:26:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.077 12:26:24 event -- scripts/common.sh@355 -- # echo 1 00:06:19.077 12:26:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.077 12:26:24 event -- scripts/common.sh@366 -- # decimal 2 00:06:19.077 12:26:24 event -- scripts/common.sh@353 -- # local d=2 00:06:19.077 12:26:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.077 12:26:24 event -- scripts/common.sh@355 -- # echo 2 00:06:19.077 12:26:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.077 12:26:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.077 12:26:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.077 12:26:24 event -- scripts/common.sh@368 -- # return 0 00:06:19.077 12:26:24 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.077 12:26:24 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.077 --rc genhtml_branch_coverage=1 00:06:19.077 --rc genhtml_function_coverage=1 00:06:19.077 --rc genhtml_legend=1 00:06:19.077 --rc geninfo_all_blocks=1 00:06:19.077 --rc geninfo_unexecuted_blocks=1 00:06:19.077 00:06:19.077 ' 00:06:19.077 12:26:24 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.077 --rc genhtml_branch_coverage=1 00:06:19.077 --rc genhtml_function_coverage=1 00:06:19.077 --rc genhtml_legend=1 00:06:19.077 --rc geninfo_all_blocks=1 00:06:19.078 --rc geninfo_unexecuted_blocks=1 00:06:19.078 00:06:19.078 ' 00:06:19.078 12:26:24 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.078 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:19.078 --rc genhtml_branch_coverage=1 00:06:19.078 --rc genhtml_function_coverage=1 00:06:19.078 --rc genhtml_legend=1 00:06:19.078 --rc geninfo_all_blocks=1 00:06:19.078 --rc geninfo_unexecuted_blocks=1 00:06:19.078 00:06:19.078 ' 00:06:19.078 12:26:24 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.078 --rc genhtml_branch_coverage=1 00:06:19.078 --rc genhtml_function_coverage=1 00:06:19.078 --rc genhtml_legend=1 00:06:19.078 --rc geninfo_all_blocks=1 00:06:19.078 --rc geninfo_unexecuted_blocks=1 00:06:19.078 00:06:19.078 ' 00:06:19.078 12:26:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:19.078 12:26:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:19.078 12:26:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:19.078 12:26:24 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:19.078 12:26:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.078 12:26:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.337 ************************************ 00:06:19.337 START TEST event_perf 00:06:19.337 ************************************ 00:06:19.337 12:26:24 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:19.337 Running I/O for 1 seconds...[2024-11-19 12:26:24.389069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:19.337 [2024-11-19 12:26:24.389245] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70236 ] 00:06:19.337 [2024-11-19 12:26:24.552786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.595 [2024-11-19 12:26:24.627417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.595 [2024-11-19 12:26:24.627658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.595 [2024-11-19 12:26:24.627687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.595 [2024-11-19 12:26:24.627853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.531 Running I/O for 1 seconds... 00:06:20.531 lcore 0: 70509 00:06:20.531 lcore 1: 70512 00:06:20.531 lcore 2: 70502 00:06:20.531 lcore 3: 70505 00:06:20.531 done. 
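The event_perf run above was launched with `-m 0xF`, and the trace shows four reactors starting on cores 0-3 before each lcore reports its event count. As a hedged illustration (this helper is not part of the SPDK test scripts), the following bash sketch decodes such a hex core mask into the lcore IDs it enables:

```shell
# Illustrative helper (not from the SPDK repo): decode a DPDK/SPDK core
# mask such as the -m 0xF argument above into the list of enabled lcores.
mask_to_lcores() {
    local mask=$(( $1 )) bit=0 out=()
    while (( mask )); do
        # Bit i set in the mask means lcore i is enabled.
        if (( mask & 1 )); then
            out+=("$bit")
        fi
        (( mask >>= 1, bit++ ))
    done
    echo "${out[*]}"
}

mask_to_lcores 0xF    # prints: 0 1 2 3
```

With `-m 0xF` this yields lcores 0 1 2 3, matching the four "Reactor started on core N" notices and the four `lcore N:` counters in the log.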
00:06:20.531 ************************************ 00:06:20.531 END TEST event_perf 00:06:20.531 ************************************ 00:06:20.531 00:06:20.531 real 0m1.427s 00:06:20.531 user 0m4.157s 00:06:20.531 sys 0m0.145s 00:06:20.531 12:26:25 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.531 12:26:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.790 12:26:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:20.790 12:26:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:20.790 12:26:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.790 12:26:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.790 ************************************ 00:06:20.790 START TEST event_reactor 00:06:20.790 ************************************ 00:06:20.790 12:26:25 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:20.790 [2024-11-19 12:26:25.885291] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:20.790 [2024-11-19 12:26:25.885433] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70281 ] 00:06:20.790 [2024-11-19 12:26:26.047516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.048 [2024-11-19 12:26:26.116855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.982 test_start 00:06:21.982 oneshot 00:06:21.982 tick 100 00:06:21.982 tick 100 00:06:21.982 tick 250 00:06:21.982 tick 100 00:06:21.982 tick 100 00:06:21.982 tick 100 00:06:21.982 tick 250 00:06:21.982 tick 500 00:06:21.982 tick 100 00:06:21.982 tick 100 00:06:21.982 tick 250 00:06:21.982 tick 100 00:06:21.982 tick 100 00:06:21.982 test_end 00:06:22.241 00:06:22.241 real 0m1.415s 00:06:22.241 user 0m1.187s 00:06:22.241 sys 0m0.121s 00:06:22.241 ************************************ 00:06:22.241 END TEST event_reactor 00:06:22.241 ************************************ 00:06:22.241 12:26:27 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.241 12:26:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:22.241 12:26:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.241 12:26:27 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:22.241 12:26:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.241 12:26:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.241 ************************************ 00:06:22.241 START TEST event_reactor_perf 00:06:22.241 ************************************ 00:06:22.241 12:26:27 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.241 [2024-11-19 
12:26:27.369616] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:22.241 [2024-11-19 12:26:27.369740] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70312 ] 00:06:22.499 [2024-11-19 12:26:27.532984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.499 [2024-11-19 12:26:27.604106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.877 test_start 00:06:23.877 test_end 00:06:23.877 Performance: 401896 events per second 00:06:23.877 00:06:23.877 real 0m1.417s 00:06:23.877 user 0m1.191s 00:06:23.877 sys 0m0.118s 00:06:23.877 12:26:28 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.877 12:26:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.877 ************************************ 00:06:23.877 END TEST event_reactor_perf 00:06:23.877 ************************************ 00:06:23.877 12:26:28 event -- event/event.sh@49 -- # uname -s 00:06:23.877 12:26:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:23.877 12:26:28 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:23.877 12:26:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.877 12:26:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.877 12:26:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.877 ************************************ 00:06:23.877 START TEST event_scheduler 00:06:23.877 ************************************ 00:06:23.877 12:26:28 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:23.877 * Looking for test storage... 
00:06:23.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:23.877 12:26:28 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:23.877 12:26:28 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:23.877 12:26:28 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.877 12:26:29 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.877 12:26:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:23.877 12:26:29 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.877 12:26:29 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.878 --rc genhtml_branch_coverage=1 00:06:23.878 --rc genhtml_function_coverage=1 00:06:23.878 --rc genhtml_legend=1 00:06:23.878 --rc geninfo_all_blocks=1 00:06:23.878 --rc geninfo_unexecuted_blocks=1 00:06:23.878 00:06:23.878 ' 00:06:23.878 12:26:29 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.878 --rc genhtml_branch_coverage=1 00:06:23.878 --rc genhtml_function_coverage=1 00:06:23.878 --rc 
genhtml_legend=1 00:06:23.878 --rc geninfo_all_blocks=1 00:06:23.878 --rc geninfo_unexecuted_blocks=1 00:06:23.878 00:06:23.878 ' 00:06:23.878 12:26:29 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:23.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.878 --rc genhtml_branch_coverage=1 00:06:23.878 --rc genhtml_function_coverage=1 00:06:23.878 --rc genhtml_legend=1 00:06:23.878 --rc geninfo_all_blocks=1 00:06:23.878 --rc geninfo_unexecuted_blocks=1 00:06:23.878 00:06:23.878 ' 00:06:23.878 12:26:29 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.878 --rc genhtml_branch_coverage=1 00:06:23.878 --rc genhtml_function_coverage=1 00:06:23.878 --rc genhtml_legend=1 00:06:23.878 --rc geninfo_all_blocks=1 00:06:23.878 --rc geninfo_unexecuted_blocks=1 00:06:23.878 00:06:23.878 ' 00:06:23.878 12:26:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:23.878 12:26:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70388 00:06:23.878 12:26:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:23.878 12:26:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.878 12:26:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70388 00:06:23.878 12:26:29 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70388 ']' 00:06:23.878 12:26:29 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.878 12:26:29 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.878 12:26:29 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:23.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.878 12:26:29 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.878 12:26:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.878 [2024-11-19 12:26:29.123606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:23.878 [2024-11-19 12:26:29.123845] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70388 ] 00:06:24.145 [2024-11-19 12:26:29.290125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.145 [2024-11-19 12:26:29.362834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.145 [2024-11-19 12:26:29.363040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.145 [2024-11-19 12:26:29.362982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.145 [2024-11-19 12:26:29.363184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.714 12:26:29 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.715 12:26:29 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:24.715 12:26:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:24.715 12:26:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.715 12:26:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:24.715 POWER: Cannot set governor of lcore 0 to userspace 00:06:24.715 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:24.715 POWER: Cannot set governor of lcore 0 to performance 00:06:24.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:24.715 POWER: Cannot set governor of lcore 0 to userspace 00:06:24.715 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:24.715 POWER: Cannot set governor of lcore 0 to userspace 00:06:24.715 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:24.715 POWER: Unable to set Power Management Environment for lcore 0 00:06:24.715 [2024-11-19 12:26:29.956075] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:24.715 [2024-11-19 12:26:29.956129] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:24.715 [2024-11-19 12:26:29.956188] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:24.715 [2024-11-19 12:26:29.956226] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:24.715 [2024-11-19 12:26:29.956257] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:24.715 [2024-11-19 12:26:29.956286] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:24.715 12:26:29 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.715 12:26:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:24.715 12:26:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.715 12:26:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 [2024-11-19 12:26:30.080857] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
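In the scheduler trace that follows, `scheduler_thread_create` is invoked once per core with the single-bit masks 0x1, 0x2, 0x4 and 0x8 and an active percentage of 100. A minimal bash sketch of that pinning pattern, assuming thread index `i` maps to mask `(1 << i)`; the actual RPC invocation is left commented out because it requires a running SPDK application:

```shell
# Illustrative sketch (not from the SPDK repo): build the single-bit
# core masks used to pin one active thread per core, as in the trace.
pin_mask() {
    printf '0x%x' $(( 1 << $1 ))
}

for i in 0 1 2 3; do
    mask=$(pin_mask "$i")
    echo "scheduler_thread_create -n active_pinned -m $mask -a 100"
    # Real call (needs a live SPDK app and the scheduler test plugin):
    # ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
    #     -n active_pinned -m "$mask" -a 100
done
```

The loop emits the same `-m 0x1` through `-m 0x8` arguments seen in the `rpc_cmd --plugin scheduler_plugin` lines of the trace.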
00:06:24.973 12:26:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:24.973 12:26:30 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.973 12:26:30 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 ************************************ 00:06:24.973 START TEST scheduler_create_thread 00:06:24.973 ************************************ 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 2 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 3 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 4 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 5 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 6 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.973 7 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 8 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 9 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.973 10 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.973 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.541 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.541 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:25.541 12:26:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:25.541 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.541 12:26:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.480 12:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.480 12:26:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:26.480 12:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.480 12:26:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.416 12:26:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.416 12:26:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:27.416 12:26:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:27.416 12:26:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.416 12:26:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.353 ************************************ 00:06:28.353 END TEST scheduler_create_thread 00:06:28.353 ************************************ 00:06:28.353 12:26:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.353 00:06:28.353 real 0m3.221s 00:06:28.353 user 0m0.029s 00:06:28.353 sys 0m0.009s 00:06:28.353 12:26:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.353 12:26:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.353 12:26:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:28.353 12:26:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70388 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70388 ']' 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70388 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70388 00:06:28.353 killing process with pid 70388 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70388' 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70388 00:06:28.353 12:26:33 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70388 00:06:28.612 [2024-11-19 12:26:33.695856] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:29.181 00:06:29.181 real 0m5.325s 00:06:29.181 user 0m10.384s 00:06:29.181 sys 0m0.594s 00:06:29.181 ************************************ 00:06:29.181 END TEST event_scheduler 00:06:29.181 ************************************ 00:06:29.181 12:26:34 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.181 12:26:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.181 12:26:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:29.181 12:26:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:29.181 12:26:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.181 12:26:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.181 12:26:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.181 ************************************ 00:06:29.181 START TEST app_repeat 00:06:29.181 ************************************ 00:06:29.181 12:26:34 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70494 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:29.181 
12:26:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70494' 00:06:29.181 Process app_repeat pid: 70494 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:29.181 spdk_app_start Round 0 00:06:29.181 12:26:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70494 /var/tmp/spdk-nbd.sock 00:06:29.181 12:26:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70494 ']' 00:06:29.181 12:26:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.181 12:26:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.181 12:26:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.181 12:26:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.181 12:26:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.181 [2024-11-19 12:26:34.277660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:29.181 [2024-11-19 12:26:34.277877] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70494 ] 00:06:29.439 [2024-11-19 12:26:34.442953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.439 [2024-11-19 12:26:34.516335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.439 [2024-11-19 12:26:34.516463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.007 12:26:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.007 12:26:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:30.007 12:26:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.266 Malloc0 00:06:30.266 12:26:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.525 Malloc1 00:06:30.525 12:26:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.525 12:26:35 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.525 12:26:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.784 /dev/nbd0 00:06:30.784 12:26:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.784 12:26:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.784 1+0 records in 00:06:30.784 1+0 
records out 00:06:30.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563061 s, 7.3 MB/s 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:30.784 12:26:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:30.784 12:26:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.784 12:26:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.784 12:26:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.044 /dev/nbd1 00:06:31.044 12:26:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.044 12:26:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.044 1+0 records in 00:06:31.044 1+0 records out 00:06:31.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491983 s, 8.3 MB/s 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.044 12:26:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.044 12:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.044 12:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.044 12:26:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.044 12:26:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.044 12:26:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.303 { 00:06:31.303 "nbd_device": "/dev/nbd0", 00:06:31.303 "bdev_name": "Malloc0" 00:06:31.303 }, 00:06:31.303 { 00:06:31.303 "nbd_device": "/dev/nbd1", 00:06:31.303 "bdev_name": "Malloc1" 00:06:31.303 } 00:06:31.303 ]' 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.303 { 00:06:31.303 "nbd_device": "/dev/nbd0", 00:06:31.303 "bdev_name": "Malloc0" 00:06:31.303 }, 00:06:31.303 { 00:06:31.303 "nbd_device": "/dev/nbd1", 00:06:31.303 "bdev_name": "Malloc1" 00:06:31.303 } 00:06:31.303 ]' 
00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.303 /dev/nbd1' 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.303 /dev/nbd1' 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.303 256+0 records in 00:06:31.303 256+0 records out 00:06:31.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130711 s, 80.2 MB/s 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.303 256+0 records in 00:06:31.303 256+0 records out 00:06:31.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239197 s, 43.8 MB/s 00:06:31.303 12:26:36 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.303 256+0 records in 00:06:31.303 256+0 records out 00:06:31.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221414 s, 47.4 MB/s 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.303 12:26:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.304 12:26:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.563 12:26:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.821 12:26:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.080 12:26:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.080 12:26:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.338 12:26:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.597 [2024-11-19 12:26:37.787419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.597 [2024-11-19 12:26:37.854893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.597 [2024-11-19 12:26:37.854899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.856 
[2024-11-19 12:26:37.930640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.856 [2024-11-19 12:26:37.930699] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.399 spdk_app_start Round 1 00:06:35.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.399 12:26:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.399 12:26:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:35.399 12:26:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70494 /var/tmp/spdk-nbd.sock 00:06:35.399 12:26:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70494 ']' 00:06:35.399 12:26:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.399 12:26:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.399 12:26:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:35.399 12:26:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.399 12:26:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.658 12:26:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.658 12:26:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:35.658 12:26:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.658 Malloc0 00:06:35.658 12:26:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.918 Malloc1 00:06:35.918 12:26:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.918 12:26:41 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.918 12:26:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.177 /dev/nbd0 00:06:36.177 12:26:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.177 12:26:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.177 1+0 records in 00:06:36.177 1+0 records out 00:06:36.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348945 s, 11.7 MB/s 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.177 
12:26:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.177 12:26:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:36.177 12:26:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.177 12:26:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.177 12:26:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.436 /dev/nbd1 00:06:36.436 12:26:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.436 12:26:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.436 1+0 records in 00:06:36.436 1+0 records out 00:06:36.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351342 s, 11.7 MB/s 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:36.436 12:26:41 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.436 12:26:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:36.436 12:26:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.436 12:26:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.436 12:26:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.436 12:26:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.436 12:26:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.696 { 00:06:36.696 "nbd_device": "/dev/nbd0", 00:06:36.696 "bdev_name": "Malloc0" 00:06:36.696 }, 00:06:36.696 { 00:06:36.696 "nbd_device": "/dev/nbd1", 00:06:36.696 "bdev_name": "Malloc1" 00:06:36.696 } 00:06:36.696 ]' 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.696 { 00:06:36.696 "nbd_device": "/dev/nbd0", 00:06:36.696 "bdev_name": "Malloc0" 00:06:36.696 }, 00:06:36.696 { 00:06:36.696 "nbd_device": "/dev/nbd1", 00:06:36.696 "bdev_name": "Malloc1" 00:06:36.696 } 00:06:36.696 ]' 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.696 /dev/nbd1' 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.696 /dev/nbd1' 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.696 
12:26:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.696 256+0 records in 00:06:36.696 256+0 records out 00:06:36.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475876 s, 220 MB/s 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.696 256+0 records in 00:06:36.696 256+0 records out 00:06:36.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230513 s, 45.5 MB/s 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.696 256+0 records in 00:06:36.696 256+0 records out 00:06:36.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227064 s, 46.2 MB/s 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.696 12:26:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.955 12:26:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.956 12:26:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.956 12:26:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.956 12:26:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.956 12:26:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.956 12:26:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.956 12:26:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.956 12:26:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.956 12:26:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.956 12:26:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.956 12:26:42 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.956 12:26:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.956 12:26:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.956 12:26:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.956 12:26:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.956 12:26:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.956 12:26:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.956 12:26:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.956 12:26:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.956 12:26:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.215 12:26:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.473 12:26:42 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.473 12:26:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.473 12:26:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.732 12:26:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.991 [2024-11-19 12:26:43.248398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.249 [2024-11-19 12:26:43.331657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.249 [2024-11-19 12:26:43.331691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.249 [2024-11-19 12:26:43.409963] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.249 [2024-11-19 12:26:43.410032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:40.780 spdk_app_start Round 2 00:06:40.780 12:26:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.780 12:26:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:40.780 12:26:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70494 /var/tmp/spdk-nbd.sock 00:06:40.780 12:26:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70494 ']' 00:06:40.780 12:26:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.780 12:26:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.780 12:26:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.780 12:26:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.780 12:26:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.038 12:26:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.038 12:26:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:41.038 12:26:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.297 Malloc0 00:06:41.297 12:26:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.555 Malloc1 00:06:41.555 12:26:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.555 
12:26:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.555 12:26:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.814 /dev/nbd0 00:06:41.814 12:26:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.814 12:26:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:41.814 12:26:46 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.814 1+0 records in 00:06:41.814 1+0 records out 00:06:41.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391709 s, 10.5 MB/s 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.814 12:26:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.814 12:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.814 12:26:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.814 12:26:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:42.072 /dev/nbd1 00:06:42.072 12:26:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:42.072 12:26:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:42.072 12:26:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:42.072 12:26:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:42.072 12:26:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:42.072 12:26:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:42.072 12:26:47 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:42.072 12:26:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:42.072 12:26:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:42.072 12:26:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:42.073 12:26:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.073 1+0 records in 00:06:42.073 1+0 records out 00:06:42.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372571 s, 11.0 MB/s 00:06:42.073 12:26:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.073 12:26:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:42.073 12:26:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.073 12:26:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:42.073 12:26:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:42.073 12:26:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.073 12:26:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.073 12:26:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.073 12:26:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.073 12:26:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.331 { 00:06:42.331 "nbd_device": "/dev/nbd0", 00:06:42.331 "bdev_name": "Malloc0" 00:06:42.331 }, 00:06:42.331 { 00:06:42.331 "nbd_device": "/dev/nbd1", 00:06:42.331 "bdev_name": 
"Malloc1" 00:06:42.331 } 00:06:42.331 ]' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.331 { 00:06:42.331 "nbd_device": "/dev/nbd0", 00:06:42.331 "bdev_name": "Malloc0" 00:06:42.331 }, 00:06:42.331 { 00:06:42.331 "nbd_device": "/dev/nbd1", 00:06:42.331 "bdev_name": "Malloc1" 00:06:42.331 } 00:06:42.331 ]' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.331 /dev/nbd1' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.331 /dev/nbd1' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.331 256+0 records in 00:06:42.331 256+0 records out 00:06:42.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125509 s, 83.5 MB/s 
00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.331 256+0 records in 00:06:42.331 256+0 records out 00:06:42.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242794 s, 43.2 MB/s 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.331 256+0 records in 00:06:42.331 256+0 records out 00:06:42.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257818 s, 40.7 MB/s 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.331 12:26:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.332 12:26:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.332 12:26:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.332 12:26:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.332 12:26:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.332 12:26:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.590 12:26:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.848 12:26:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.106 12:26:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.106 12:26:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.365 12:26:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.623 [2024-11-19 12:26:48.682551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.623 [2024-11-19 12:26:48.753105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.623 [2024-11-19 12:26:48.753111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.623 [2024-11-19 12:26:48.828663] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.623 [2024-11-19 12:26:48.828724] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.157 12:26:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70494 /var/tmp/spdk-nbd.sock 00:06:46.157 12:26:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70494 ']' 00:06:46.157 12:26:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.157 12:26:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.157 12:26:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:46.157 12:26:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.157 12:26:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:46.417 12:26:51 event.app_repeat -- event/event.sh@39 -- # killprocess 70494 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70494 ']' 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70494 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70494 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70494' 00:06:46.417 killing process with pid 70494 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70494 00:06:46.417 12:26:51 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70494 00:06:46.677 spdk_app_start is called in Round 0. 00:06:46.677 Shutdown signal received, stop current app iteration 00:06:46.677 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:46.677 spdk_app_start is called in Round 1. 00:06:46.677 Shutdown signal received, stop current app iteration 00:06:46.677 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:46.677 spdk_app_start is called in Round 2. 
00:06:46.677 Shutdown signal received, stop current app iteration 00:06:46.677 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:46.677 spdk_app_start is called in Round 3. 00:06:46.677 Shutdown signal received, stop current app iteration 00:06:46.677 12:26:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:46.677 12:26:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:46.677 00:06:46.677 real 0m17.654s 00:06:46.677 user 0m38.114s 00:06:46.677 sys 0m3.116s 00:06:46.677 12:26:51 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.677 12:26:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.677 ************************************ 00:06:46.677 END TEST app_repeat 00:06:46.677 ************************************ 00:06:46.677 12:26:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:46.677 12:26:51 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.677 12:26:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.677 12:26:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.677 12:26:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.938 ************************************ 00:06:46.938 START TEST cpu_locks 00:06:46.938 ************************************ 00:06:46.938 12:26:51 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.938 * Looking for test storage... 
00:06:46.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.938 12:26:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.938 --rc genhtml_branch_coverage=1 00:06:46.938 --rc genhtml_function_coverage=1 00:06:46.938 --rc genhtml_legend=1 00:06:46.938 --rc geninfo_all_blocks=1 00:06:46.938 --rc geninfo_unexecuted_blocks=1 00:06:46.938 00:06:46.938 ' 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.938 --rc genhtml_branch_coverage=1 00:06:46.938 --rc genhtml_function_coverage=1 00:06:46.938 --rc genhtml_legend=1 00:06:46.938 --rc geninfo_all_blocks=1 00:06:46.938 --rc geninfo_unexecuted_blocks=1 
00:06:46.938 00:06:46.938 ' 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.938 --rc genhtml_branch_coverage=1 00:06:46.938 --rc genhtml_function_coverage=1 00:06:46.938 --rc genhtml_legend=1 00:06:46.938 --rc geninfo_all_blocks=1 00:06:46.938 --rc geninfo_unexecuted_blocks=1 00:06:46.938 00:06:46.938 ' 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.938 --rc genhtml_branch_coverage=1 00:06:46.938 --rc genhtml_function_coverage=1 00:06:46.938 --rc genhtml_legend=1 00:06:46.938 --rc geninfo_all_blocks=1 00:06:46.938 --rc geninfo_unexecuted_blocks=1 00:06:46.938 00:06:46.938 ' 00:06:46.938 12:26:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:46.938 12:26:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:46.938 12:26:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:46.938 12:26:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.938 12:26:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.938 ************************************ 00:06:46.938 START TEST default_locks 00:06:46.938 ************************************ 00:06:46.938 12:26:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:46.938 12:26:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70921 00:06:46.938 12:26:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.938 
12:26:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70921 00:06:46.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.938 12:26:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70921 ']' 00:06:46.938 12:26:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.938 12:26:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.938 12:26:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.938 12:26:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.938 12:26:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.197 [2024-11-19 12:26:52.284320] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:47.197 [2024-11-19 12:26:52.284430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70921 ] 00:06:47.197 [2024-11-19 12:26:52.443353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.455 [2024-11-19 12:26:52.497069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.023 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.023 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:48.023 12:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70921 00:06:48.023 12:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70921 00:06:48.023 12:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70921 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70921 ']' 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70921 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70921 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 70921' 00:06:48.282 killing process with pid 70921 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70921 00:06:48.282 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70921 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70921 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70921 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70921 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70921 ']' 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.848 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70921) - No such process 00:06:48.848 ERROR: process (pid: 70921) is no longer running 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.848 ************************************ 00:06:48.848 END TEST default_locks 00:06:48.848 ************************************ 00:06:48.848 00:06:48.848 real 0m1.670s 00:06:48.848 user 0m1.653s 00:06:48.848 sys 0m0.565s 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.848 12:26:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.848 12:26:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.848 12:26:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:48.848 12:26:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.848 12:26:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.848 ************************************ 00:06:48.848 START TEST default_locks_via_rpc 00:06:48.848 ************************************ 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70969 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70969 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70969 ']' 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.848 12:26:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.848 [2024-11-19 12:26:54.024604] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:48.848 [2024-11-19 12:26:54.024729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70969 ] 00:06:49.107 [2024-11-19 12:26:54.184084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.107 [2024-11-19 12:26:54.238195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.676 12:26:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.677 12:26:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.677 12:26:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.677 12:26:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70969 00:06:49.677 12:26:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70969 00:06:49.677 12:26:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.936 12:26:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70969 00:06:49.936 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70969 ']' 00:06:49.936 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70969 00:06:49.936 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:49.936 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.936 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70969 00:06:50.196 killing process with pid 70969 00:06:50.196 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.196 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.196 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70969' 00:06:50.196 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70969 00:06:50.196 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70969 00:06:50.456 ************************************ 00:06:50.457 END TEST default_locks_via_rpc 00:06:50.457 ************************************ 00:06:50.457 00:06:50.457 real 0m1.674s 00:06:50.457 user 0m1.651s 00:06:50.457 sys 0m0.568s 00:06:50.457 
12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.457 12:26:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.457 12:26:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:50.457 12:26:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.457 12:26:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.457 12:26:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.457 ************************************ 00:06:50.457 START TEST non_locking_app_on_locked_coremask 00:06:50.457 ************************************ 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71021 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71021 /var/tmp/spdk.sock 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71021 ']' 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:50.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.457 12:26:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.717 [2024-11-19 12:26:55.776456] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:50.717 [2024-11-19 12:26:55.776723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71021 ] 00:06:50.717 [2024-11-19 12:26:55.942939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.978 [2024-11-19 12:26:55.996922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71038 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71038 /var/tmp/spdk2.sock 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71038 ']' 00:06:51.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.550 12:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.550 [2024-11-19 12:26:56.714142] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:51.550 [2024-11-19 12:26:56.714426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71038 ] 00:06:51.810 [2024-11-19 12:26:56.868195] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.810 [2024-11-19 12:26:56.868298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.810 [2024-11-19 12:26:56.977954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.385 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.385 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:52.385 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71021 00:06:52.385 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71021 00:06:52.385 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71021 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71021 ']' 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71021 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71021 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
71021' 00:06:52.647 killing process with pid 71021 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71021 00:06:52.647 12:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71021 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71038 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71038 ']' 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71038 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71038 00:06:53.585 killing process with pid 71038 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71038' 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71038 00:06:53.585 12:26:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71038 00:06:53.845 ************************************ 00:06:53.845 END TEST non_locking_app_on_locked_coremask 00:06:53.845 ************************************ 00:06:53.845 00:06:53.845 real 0m3.427s 00:06:53.845 user 0m3.583s 
00:06:53.845 sys 0m1.071s 00:06:53.845 12:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.845 12:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.105 12:26:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:54.105 12:26:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.105 12:26:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.105 12:26:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.105 ************************************ 00:06:54.105 START TEST locking_app_on_unlocked_coremask 00:06:54.105 ************************************ 00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71096 00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71096 /var/tmp/spdk.sock 00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71096 ']' 00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.105 12:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.105 [2024-11-19 12:26:59.280563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:54.105 [2024-11-19 12:26:59.280724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71096 ] 00:06:54.365 [2024-11-19 12:26:59.448351] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:54.365 [2024-11-19 12:26:59.448427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.365 [2024-11-19 12:26:59.494846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71112 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71112 /var/tmp/spdk2.sock 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71112 
']' 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.935 12:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.195 [2024-11-19 12:27:00.233571] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:55.195 [2024-11-19 12:27:00.233864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71112 ] 00:06:55.196 [2024-11-19 12:27:00.392044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.455 [2024-11-19 12:27:00.503019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.025 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.025 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:56.025 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71112 00:06:56.025 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71112 00:06:56.025 12:27:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71096 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71096 ']' 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71096 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71096 00:06:56.963 killing process with pid 71096 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71096' 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71096 00:06:56.963 12:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71096 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71112 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71112 ']' 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71112 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71112 00:06:57.534 killing process with pid 71112 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71112' 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71112 00:06:57.534 12:27:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71112 00:06:58.108 ************************************ 00:06:58.108 END TEST locking_app_on_unlocked_coremask 00:06:58.108 ************************************ 00:06:58.108 00:06:58.108 real 0m3.968s 00:06:58.108 user 0m4.206s 00:06:58.108 sys 0m1.268s 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.108 12:27:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:58.108 12:27:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.108 12:27:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.108 12:27:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.108 ************************************ 00:06:58.108 START TEST 
locking_app_on_locked_coremask 00:06:58.108 ************************************ 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71181 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71181 /var/tmp/spdk.sock 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71181 ']' 00:06:58.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.108 12:27:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.108 [2024-11-19 12:27:03.321875] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:58.108 [2024-11-19 12:27:03.322162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71181 ] 00:06:58.374 [2024-11-19 12:27:03.480528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.374 [2024-11-19 12:27:03.530407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.943 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.943 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:58.943 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.943 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71197 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71197 /var/tmp/spdk2.sock 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71197 /var/tmp/spdk2.sock 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71197 /var/tmp/spdk2.sock 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71197 ']' 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.944 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.203 [2024-11-19 12:27:04.218867] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:59.203 [2024-11-19 12:27:04.219122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71197 ] 00:06:59.203 [2024-11-19 12:27:04.372178] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71181 has claimed it. 00:06:59.203 [2024-11-19 12:27:04.372260] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:59.773 ERROR: process (pid: 71197) is no longer running 00:06:59.773 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71197) - No such process 00:06:59.773 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.773 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:59.773 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:59.773 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.773 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.773 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.773 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71181 00:06:59.773 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71181 00:06:59.773 12:27:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71181 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71181 ']' 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71181 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71181 00:07:00.343 
12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71181' 00:07:00.343 killing process with pid 71181 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71181 00:07:00.343 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71181 00:07:00.603 00:07:00.603 real 0m2.548s 00:07:00.603 user 0m2.736s 00:07:00.603 sys 0m0.780s 00:07:00.603 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.603 12:27:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.603 ************************************ 00:07:00.603 END TEST locking_app_on_locked_coremask 00:07:00.603 ************************************ 00:07:00.603 12:27:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:00.603 12:27:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.603 12:27:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.603 12:27:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.603 ************************************ 00:07:00.603 START TEST locking_overlapped_coremask 00:07:00.603 ************************************ 00:07:00.603 12:27:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:00.603 12:27:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71252 00:07:00.603 12:27:05 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:00.603 12:27:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71252 /var/tmp/spdk.sock 00:07:00.603 12:27:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71252 ']' 00:07:00.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.603 12:27:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.604 12:27:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.604 12:27:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.604 12:27:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.604 12:27:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.863 [2024-11-19 12:27:05.921064] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:00.863 [2024-11-19 12:27:05.921284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71252 ] 00:07:00.863 [2024-11-19 12:27:06.083469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.123 [2024-11-19 12:27:06.134419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.123 [2024-11-19 12:27:06.134601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.123 [2024-11-19 12:27:06.134797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.693 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.693 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:01.693 12:27:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:01.693 12:27:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71259 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71259 /var/tmp/spdk2.sock 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71259 /var/tmp/spdk2.sock 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:01.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71259 /var/tmp/spdk2.sock 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71259 ']' 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.694 12:27:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.694 [2024-11-19 12:27:06.819590] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:01.694 [2024-11-19 12:27:06.819826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71259 ] 00:07:01.953 [2024-11-19 12:27:06.971241] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71252 has claimed it. 00:07:01.953 [2024-11-19 12:27:06.971325] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:02.524 ERROR: process (pid: 71259) is no longer running 00:07:02.524 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71259) - No such process 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71252 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71252 ']' 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71252 00:07:02.524 12:27:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71252 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71252' 00:07:02.524 killing process with pid 71252 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71252 00:07:02.524 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71252 00:07:02.785 00:07:02.785 real 0m2.107s 00:07:02.785 user 0m5.551s 00:07:02.785 sys 0m0.543s 00:07:02.785 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.785 12:27:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.785 ************************************ 00:07:02.785 END TEST locking_overlapped_coremask 00:07:02.785 ************************************ 00:07:02.785 12:27:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.785 12:27:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.785 12:27:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.785 12:27:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.785 ************************************ 00:07:02.785 START TEST 
locking_overlapped_coremask_via_rpc 00:07:02.785 ************************************ 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71314 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71314 /var/tmp/spdk.sock 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71314 ']' 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.785 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.045 [2024-11-19 12:27:08.109739] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:03.045 [2024-11-19 12:27:08.109915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71314 ] 00:07:03.045 [2024-11-19 12:27:08.263945] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:03.045 [2024-11-19 12:27:08.264005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.305 [2024-11-19 12:27:08.310924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.305 [2024-11-19 12:27:08.311079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.305 [2024-11-19 12:27:08.311217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71331 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71331 /var/tmp/spdk2.sock 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71331 ']' 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.921 12:27:08 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.921 12:27:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.921 [2024-11-19 12:27:08.987251] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:03.921 [2024-11-19 12:27:08.987458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71331 ] 00:07:03.921 [2024-11-19 12:27:09.136691] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.921 [2024-11-19 12:27:09.140825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.178 [2024-11-19 12:27:09.316266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.178 [2024-11-19 12:27:09.316309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.178 [2024-11-19 12:27:09.316415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.113 12:27:10 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.113 [2024-11-19 12:27:10.034040] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71314 has claimed it. 00:07:05.113 request: 00:07:05.113 { 00:07:05.113 "method": "framework_enable_cpumask_locks", 00:07:05.113 "req_id": 1 00:07:05.113 } 00:07:05.113 Got JSON-RPC error response 00:07:05.113 response: 00:07:05.113 { 00:07:05.113 "code": -32603, 00:07:05.113 "message": "Failed to claim CPU core: 2" 00:07:05.113 } 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71314 /var/tmp/spdk.sock 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 71314 ']' 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71331 /var/tmp/spdk2.sock 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71331 ']' 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.113 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.372 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.372 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.372 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:05.372 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.372 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.372 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.372 00:07:05.372 real 0m2.484s 00:07:05.372 user 0m1.088s 00:07:05.372 sys 0m0.192s 00:07:05.372 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.372 12:27:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.372 ************************************ 00:07:05.372 END TEST locking_overlapped_coremask_via_rpc 00:07:05.372 ************************************ 00:07:05.372 12:27:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:05.372 12:27:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71314 ]] 00:07:05.372 12:27:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71314 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71314 ']' 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71314 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71314 00:07:05.372 killing process with pid 71314 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71314' 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71314 00:07:05.372 12:27:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71314 00:07:05.940 12:27:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71331 ]] 00:07:05.940 12:27:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71331 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71331 ']' 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71331 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71331 00:07:05.940 killing process with pid 71331 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71331' 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71331 00:07:05.940 12:27:11 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71331 00:07:06.508 12:27:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.509 Process with pid 71314 is not found 00:07:06.509 12:27:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:06.509 12:27:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71314 ]] 00:07:06.509 12:27:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71314 00:07:06.509 12:27:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71314 ']' 00:07:06.509 12:27:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71314 00:07:06.509 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71314) - No such process 00:07:06.509 12:27:11 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71314 is not found' 00:07:06.509 12:27:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71331 ]] 00:07:06.509 12:27:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71331 00:07:06.509 12:27:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71331 ']' 00:07:06.509 12:27:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71331 00:07:06.509 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71331) - No such process 00:07:06.509 Process with pid 71331 is not found 00:07:06.509 12:27:11 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71331 is not found' 00:07:06.509 12:27:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.509 00:07:06.509 real 0m19.777s 00:07:06.509 user 0m33.236s 00:07:06.509 sys 0m6.339s 00:07:06.509 12:27:11 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.509 12:27:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.509 
************************************ 00:07:06.509 END TEST cpu_locks 00:07:06.509 ************************************ 00:07:06.768 00:07:06.768 real 0m47.678s 00:07:06.768 user 1m28.519s 00:07:06.768 sys 0m10.858s 00:07:06.768 12:27:11 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.768 12:27:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.768 ************************************ 00:07:06.768 END TEST event 00:07:06.768 ************************************ 00:07:06.768 12:27:11 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:06.768 12:27:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.768 12:27:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.768 12:27:11 -- common/autotest_common.sh@10 -- # set +x 00:07:06.768 ************************************ 00:07:06.768 START TEST thread 00:07:06.768 ************************************ 00:07:06.768 12:27:11 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:06.768 * Looking for test storage... 
00:07:06.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:06.768 12:27:11 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.768 12:27:11 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.768 12:27:11 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:07.028 12:27:12 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:07.028 12:27:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.028 12:27:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.028 12:27:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.028 12:27:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.028 12:27:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.028 12:27:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.028 12:27:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.028 12:27:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.028 12:27:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.028 12:27:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.028 12:27:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.028 12:27:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:07.028 12:27:12 thread -- scripts/common.sh@345 -- # : 1 00:07:07.028 12:27:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.028 12:27:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.028 12:27:12 thread -- scripts/common.sh@365 -- # decimal 1 00:07:07.028 12:27:12 thread -- scripts/common.sh@353 -- # local d=1 00:07:07.028 12:27:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.028 12:27:12 thread -- scripts/common.sh@355 -- # echo 1 00:07:07.028 12:27:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.028 12:27:12 thread -- scripts/common.sh@366 -- # decimal 2 00:07:07.028 12:27:12 thread -- scripts/common.sh@353 -- # local d=2 00:07:07.028 12:27:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.028 12:27:12 thread -- scripts/common.sh@355 -- # echo 2 00:07:07.028 12:27:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.028 12:27:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.028 12:27:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.028 12:27:12 thread -- scripts/common.sh@368 -- # return 0 00:07:07.028 12:27:12 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.028 12:27:12 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:07.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.028 --rc genhtml_branch_coverage=1 00:07:07.028 --rc genhtml_function_coverage=1 00:07:07.028 --rc genhtml_legend=1 00:07:07.029 --rc geninfo_all_blocks=1 00:07:07.029 --rc geninfo_unexecuted_blocks=1 00:07:07.029 00:07:07.029 ' 00:07:07.029 12:27:12 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:07.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.029 --rc genhtml_branch_coverage=1 00:07:07.029 --rc genhtml_function_coverage=1 00:07:07.029 --rc genhtml_legend=1 00:07:07.029 --rc geninfo_all_blocks=1 00:07:07.029 --rc geninfo_unexecuted_blocks=1 00:07:07.029 00:07:07.029 ' 00:07:07.029 12:27:12 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:07.029 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.029 --rc genhtml_branch_coverage=1 00:07:07.029 --rc genhtml_function_coverage=1 00:07:07.029 --rc genhtml_legend=1 00:07:07.029 --rc geninfo_all_blocks=1 00:07:07.029 --rc geninfo_unexecuted_blocks=1 00:07:07.029 00:07:07.029 ' 00:07:07.029 12:27:12 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:07.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.029 --rc genhtml_branch_coverage=1 00:07:07.029 --rc genhtml_function_coverage=1 00:07:07.029 --rc genhtml_legend=1 00:07:07.029 --rc geninfo_all_blocks=1 00:07:07.029 --rc geninfo_unexecuted_blocks=1 00:07:07.029 00:07:07.029 ' 00:07:07.029 12:27:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.029 12:27:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:07.029 12:27:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.029 12:27:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.029 ************************************ 00:07:07.029 START TEST thread_poller_perf 00:07:07.029 ************************************ 00:07:07.029 12:27:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.029 [2024-11-19 12:27:12.107509] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:07.029 [2024-11-19 12:27:12.107632] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71469 ] 00:07:07.029 [2024-11-19 12:27:12.266989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.289 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:07.289 [2024-11-19 12:27:12.312008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.229 [2024-11-19T12:27:13.490Z] ====================================== 00:07:08.229 [2024-11-19T12:27:13.490Z] busy:2297642670 (cyc) 00:07:08.229 [2024-11-19T12:27:13.490Z] total_run_count: 413000 00:07:08.229 [2024-11-19T12:27:13.490Z] tsc_hz: 2290000000 (cyc) 00:07:08.229 [2024-11-19T12:27:13.490Z] ====================================== 00:07:08.229 [2024-11-19T12:27:13.490Z] poller_cost: 5563 (cyc), 2429 (nsec) 00:07:08.229 ************************************ 00:07:08.229 END TEST thread_poller_perf 00:07:08.229 00:07:08.229 real 0m1.345s 00:07:08.229 user 0m1.149s 00:07:08.229 sys 0m0.090s 00:07:08.229 12:27:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.229 12:27:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.229 ************************************ 00:07:08.229 12:27:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.229 12:27:13 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:08.229 12:27:13 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.229 12:27:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.229 ************************************ 00:07:08.229 START TEST thread_poller_perf 00:07:08.229 
************************************ 00:07:08.229 12:27:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.489 [2024-11-19 12:27:13.523822] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:08.489 [2024-11-19 12:27:13.523982] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71501 ] 00:07:08.489 [2024-11-19 12:27:13.688701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.489 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:08.489 [2024-11-19 12:27:13.735592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.870 [2024-11-19T12:27:15.131Z] ====================================== 00:07:09.870 [2024-11-19T12:27:15.131Z] busy:2293347596 (cyc) 00:07:09.870 [2024-11-19T12:27:15.131Z] total_run_count: 5227000 00:07:09.870 [2024-11-19T12:27:15.131Z] tsc_hz: 2290000000 (cyc) 00:07:09.870 [2024-11-19T12:27:15.131Z] ====================================== 00:07:09.870 [2024-11-19T12:27:15.132Z] poller_cost: 438 (cyc), 191 (nsec) 00:07:09.871 00:07:09.871 real 0m1.357s 00:07:09.871 user 0m1.144s 00:07:09.871 sys 0m0.107s 00:07:09.871 12:27:14 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.871 ************************************ 00:07:09.871 END TEST thread_poller_perf 00:07:09.871 ************************************ 00:07:09.871 12:27:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.871 12:27:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:09.871 ************************************ 00:07:09.871 END TEST thread 00:07:09.871 ************************************ 00:07:09.871 
00:07:09.871 real 0m3.057s 00:07:09.871 user 0m2.459s 00:07:09.871 sys 0m0.397s 00:07:09.871 12:27:14 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.871 12:27:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.871 12:27:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:09.871 12:27:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:09.871 12:27:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.871 12:27:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.871 12:27:14 -- common/autotest_common.sh@10 -- # set +x 00:07:09.871 ************************************ 00:07:09.871 START TEST app_cmdline 00:07:09.871 ************************************ 00:07:09.871 12:27:14 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:09.871 * Looking for test storage... 00:07:09.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:09.871 12:27:15 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:09.871 12:27:15 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:09.871 12:27:15 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:10.131 12:27:15 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.131 12:27:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:10.131 12:27:15 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.131 12:27:15 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.131 --rc genhtml_branch_coverage=1 00:07:10.131 --rc genhtml_function_coverage=1 00:07:10.131 --rc 
genhtml_legend=1 00:07:10.131 --rc geninfo_all_blocks=1 00:07:10.131 --rc geninfo_unexecuted_blocks=1 00:07:10.131 00:07:10.131 ' 00:07:10.131 12:27:15 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.131 --rc genhtml_branch_coverage=1 00:07:10.131 --rc genhtml_function_coverage=1 00:07:10.131 --rc genhtml_legend=1 00:07:10.131 --rc geninfo_all_blocks=1 00:07:10.131 --rc geninfo_unexecuted_blocks=1 00:07:10.131 00:07:10.131 ' 00:07:10.131 12:27:15 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.131 --rc genhtml_branch_coverage=1 00:07:10.131 --rc genhtml_function_coverage=1 00:07:10.131 --rc genhtml_legend=1 00:07:10.131 --rc geninfo_all_blocks=1 00:07:10.131 --rc geninfo_unexecuted_blocks=1 00:07:10.131 00:07:10.131 ' 00:07:10.131 12:27:15 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.131 --rc genhtml_branch_coverage=1 00:07:10.131 --rc genhtml_function_coverage=1 00:07:10.131 --rc genhtml_legend=1 00:07:10.131 --rc geninfo_all_blocks=1 00:07:10.131 --rc geninfo_unexecuted_blocks=1 00:07:10.131 00:07:10.131 ' 00:07:10.131 12:27:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:10.131 12:27:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71585 00:07:10.132 12:27:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:10.132 12:27:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71585 00:07:10.132 12:27:15 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71585 ']' 00:07:10.132 12:27:15 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.132 12:27:15 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:07:10.132 12:27:15 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.132 12:27:15 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.132 12:27:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.132 [2024-11-19 12:27:15.284665] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:10.132 [2024-11-19 12:27:15.284911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71585 ] 00:07:10.392 [2024-11-19 12:27:15.439643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.392 [2024-11-19 12:27:15.492473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.962 12:27:16 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.962 12:27:16 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:10.962 12:27:16 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:11.222 { 00:07:11.222 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:11.222 "fields": { 00:07:11.222 "major": 24, 00:07:11.222 "minor": 9, 00:07:11.222 "patch": 1, 00:07:11.222 "suffix": "-pre", 00:07:11.222 "commit": "b18e1bd62" 00:07:11.222 } 00:07:11.222 } 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:11.222 12:27:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:11.222 12:27:16 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.482 request: 00:07:11.482 { 00:07:11.482 "method": "env_dpdk_get_mem_stats", 00:07:11.482 "req_id": 1 00:07:11.482 } 00:07:11.482 Got JSON-RPC error response 00:07:11.482 response: 00:07:11.482 { 00:07:11.482 "code": -32601, 00:07:11.482 "message": "Method not found" 00:07:11.482 } 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.482 12:27:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71585 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71585 ']' 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71585 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71585 00:07:11.482 killing process with pid 71585 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.482 12:27:16 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71585' 00:07:11.483 12:27:16 app_cmdline -- common/autotest_common.sh@969 -- # kill 71585 00:07:11.483 12:27:16 app_cmdline -- common/autotest_common.sh@974 -- # wait 71585 00:07:12.053 00:07:12.053 real 0m2.043s 00:07:12.053 user 0m2.247s 00:07:12.053 sys 0m0.606s 00:07:12.053 12:27:17 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.053 12:27:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.053 ************************************ 00:07:12.053 END TEST app_cmdline 00:07:12.053 ************************************ 00:07:12.053 12:27:17 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:12.053 12:27:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.053 12:27:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.053 12:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:12.053 ************************************ 00:07:12.053 START TEST version 00:07:12.053 ************************************ 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:12.053 * Looking for test storage... 00:07:12.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:12.053 12:27:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.053 12:27:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.053 12:27:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.053 12:27:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.053 12:27:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.053 12:27:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.053 12:27:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.053 12:27:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.053 12:27:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.053 12:27:17 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:12.053 12:27:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.053 12:27:17 version -- scripts/common.sh@344 -- # case "$op" in 00:07:12.053 12:27:17 version -- scripts/common.sh@345 -- # : 1 00:07:12.053 12:27:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.053 12:27:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.053 12:27:17 version -- scripts/common.sh@365 -- # decimal 1 00:07:12.053 12:27:17 version -- scripts/common.sh@353 -- # local d=1 00:07:12.053 12:27:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.053 12:27:17 version -- scripts/common.sh@355 -- # echo 1 00:07:12.053 12:27:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.053 12:27:17 version -- scripts/common.sh@366 -- # decimal 2 00:07:12.053 12:27:17 version -- scripts/common.sh@353 -- # local d=2 00:07:12.053 12:27:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.053 12:27:17 version -- scripts/common.sh@355 -- # echo 2 00:07:12.053 12:27:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.053 12:27:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.053 12:27:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.053 12:27:17 version -- scripts/common.sh@368 -- # return 0 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:12.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.053 --rc genhtml_branch_coverage=1 00:07:12.053 --rc genhtml_function_coverage=1 00:07:12.053 --rc genhtml_legend=1 00:07:12.053 --rc geninfo_all_blocks=1 00:07:12.053 --rc geninfo_unexecuted_blocks=1 00:07:12.053 00:07:12.053 ' 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:07:12.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.053 --rc genhtml_branch_coverage=1 00:07:12.053 --rc genhtml_function_coverage=1 00:07:12.053 --rc genhtml_legend=1 00:07:12.053 --rc geninfo_all_blocks=1 00:07:12.053 --rc geninfo_unexecuted_blocks=1 00:07:12.053 00:07:12.053 ' 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:12.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.053 --rc genhtml_branch_coverage=1 00:07:12.053 --rc genhtml_function_coverage=1 00:07:12.053 --rc genhtml_legend=1 00:07:12.053 --rc geninfo_all_blocks=1 00:07:12.053 --rc geninfo_unexecuted_blocks=1 00:07:12.053 00:07:12.053 ' 00:07:12.053 12:27:17 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:12.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.053 --rc genhtml_branch_coverage=1 00:07:12.053 --rc genhtml_function_coverage=1 00:07:12.053 --rc genhtml_legend=1 00:07:12.053 --rc geninfo_all_blocks=1 00:07:12.053 --rc geninfo_unexecuted_blocks=1 00:07:12.053 00:07:12.053 ' 00:07:12.053 12:27:17 version -- app/version.sh@17 -- # get_header_version major 00:07:12.053 12:27:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.053 12:27:17 version -- app/version.sh@14 -- # cut -f2 00:07:12.053 12:27:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.053 12:27:17 version -- app/version.sh@17 -- # major=24 00:07:12.053 12:27:17 version -- app/version.sh@18 -- # get_header_version minor 00:07:12.053 12:27:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.053 12:27:17 version -- app/version.sh@14 -- # cut -f2 00:07:12.053 12:27:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.313 12:27:17 version -- app/version.sh@18 -- # minor=9 00:07:12.313 12:27:17 
version -- app/version.sh@19 -- # get_header_version patch 00:07:12.313 12:27:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.313 12:27:17 version -- app/version.sh@14 -- # cut -f2 00:07:12.313 12:27:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.313 12:27:17 version -- app/version.sh@19 -- # patch=1 00:07:12.313 12:27:17 version -- app/version.sh@20 -- # get_header_version suffix 00:07:12.313 12:27:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:12.313 12:27:17 version -- app/version.sh@14 -- # cut -f2 00:07:12.313 12:27:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.313 12:27:17 version -- app/version.sh@20 -- # suffix=-pre 00:07:12.313 12:27:17 version -- app/version.sh@22 -- # version=24.9 00:07:12.313 12:27:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:12.313 12:27:17 version -- app/version.sh@25 -- # version=24.9.1 00:07:12.313 12:27:17 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:12.313 12:27:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:12.313 12:27:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:12.313 12:27:17 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:12.313 12:27:17 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:12.313 ************************************ 00:07:12.313 END TEST version 00:07:12.313 ************************************ 00:07:12.313 00:07:12.313 real 0m0.323s 00:07:12.313 user 0m0.192s 00:07:12.313 sys 0m0.189s 00:07:12.313 12:27:17 version -- common/autotest_common.sh@1126 -- # xtrace_disable 
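The `get_header_version` calls traced above grep `#define SPDK_VERSION_*` lines out of `include/spdk/version.h` and strip the quotes. A hedged re-sketch (the real `app/version.sh` pipes through `cut -f2 | tr -d '"'` on the tab-delimited header; `awk` is used here so the sketch is whitespace-agnostic, and the sample `version.h` below is a stand-in, not the real file):

```shell
#!/usr/bin/env bash
# Sketch of the version-assembly flow traced above, ending in 24.9.1rc0.
get_header_version() { # $1 = MAJOR|MINOR|PATCH|SUFFIX, $2 = path to version.h
  grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$2" | awk '{print $3}' | tr -d '"'
}

# Stand-in header with the values seen in the log (v24.09.1-pre).
cat > /tmp/version.h <<'EOF'
#define SPDK_VERSION_MAJOR 24
#define SPDK_VERSION_MINOR 9
#define SPDK_VERSION_PATCH 1
#define SPDK_VERSION_SUFFIX "-pre"
EOF

major=$(get_header_version MAJOR /tmp/version.h)
minor=$(get_header_version MINOR /tmp/version.h)
patch=$(get_header_version PATCH /tmp/version.h)
suffix=$(get_header_version SUFFIX /tmp/version.h)

version="$major.$minor"
[ "$patch" != 0 ] && version="$version.$patch"   # 24.9 -> 24.9.1
# Per the trace, a "-pre" suffix becomes an rc0 tag on the version string.
[ "$suffix" = "-pre" ] && version="${version}rc0"
echo "$version"   # prints 24.9.1rc0
```

The final check in the log (`[[ 24.9.1rc0 == 24.9.1rc0 ]]`) compares this shell-derived string against `python3 -c 'import spdk; print(spdk.__version__)'`.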
00:07:12.313 12:27:17 version -- common/autotest_common.sh@10 -- # set +x 00:07:12.313 12:27:17 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:12.313 12:27:17 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:12.313 12:27:17 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:12.313 12:27:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.313 12:27:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.313 12:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:12.313 ************************************ 00:07:12.313 START TEST bdev_raid 00:07:12.313 ************************************ 00:07:12.313 12:27:17 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:12.313 * Looking for test storage... 00:07:12.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.573 12:27:17 bdev_raid -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.573 12:27:17 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:12.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.573 --rc genhtml_branch_coverage=1 00:07:12.573 --rc genhtml_function_coverage=1 00:07:12.573 --rc genhtml_legend=1 00:07:12.573 --rc geninfo_all_blocks=1 00:07:12.573 --rc geninfo_unexecuted_blocks=1 00:07:12.573 00:07:12.573 ' 00:07:12.573 12:27:17 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:12.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.573 --rc genhtml_branch_coverage=1 00:07:12.573 --rc genhtml_function_coverage=1 00:07:12.573 --rc genhtml_legend=1 00:07:12.573 --rc geninfo_all_blocks=1 00:07:12.573 --rc geninfo_unexecuted_blocks=1 00:07:12.573 00:07:12.573 ' 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:12.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.573 --rc genhtml_branch_coverage=1 00:07:12.573 --rc genhtml_function_coverage=1 00:07:12.573 --rc genhtml_legend=1 00:07:12.573 --rc geninfo_all_blocks=1 00:07:12.573 --rc geninfo_unexecuted_blocks=1 00:07:12.573 00:07:12.573 ' 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:12.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.573 --rc genhtml_branch_coverage=1 00:07:12.573 --rc genhtml_function_coverage=1 00:07:12.573 --rc genhtml_legend=1 00:07:12.573 --rc geninfo_all_blocks=1 00:07:12.573 --rc geninfo_unexecuted_blocks=1 00:07:12.573 00:07:12.573 ' 00:07:12.573 12:27:17 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:12.573 12:27:17 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:12.573 12:27:17 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:12.573 12:27:17 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:12.573 12:27:17 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:12.573 12:27:17 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:12.573 12:27:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.573 12:27:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.573 12:27:17 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:12.573 ************************************ 00:07:12.573 START TEST raid1_resize_data_offset_test 00:07:12.573 ************************************ 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71750 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71750' 00:07:12.573 Process raid pid: 71750 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71750 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71750 ']' 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.573 12:27:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.573 [2024-11-19 12:27:17.790378] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:12.573 [2024-11-19 12:27:17.790617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.833 [2024-11-19 12:27:17.956042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.833 [2024-11-19 12:27:18.003642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.833 [2024-11-19 12:27:18.046548] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.833 [2024-11-19 12:27:18.046724] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.403 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.403 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:13.403 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:13.403 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.403 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.403 malloc0 00:07:13.403 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.404 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:13.404 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.404 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.664 malloc1 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.664 12:27:18 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.664 null0 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.664 [2024-11-19 12:27:18.684786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:13.664 [2024-11-19 12:27:18.686691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:13.664 [2024-11-19 12:27:18.686793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:13.664 [2024-11-19 12:27:18.686967] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:13.664 [2024-11-19 12:27:18.687016] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:13.664 [2024-11-19 12:27:18.687317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:13.664 [2024-11-19 12:27:18.687487] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:13.664 [2024-11-19 12:27:18.687535] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:13.664 [2024-11-19 12:27:18.687709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.664 [2024-11-19 12:27:18.744656] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.664 malloc2 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.664 [2024-11-19 12:27:18.869976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:13.664 [2024-11-19 12:27:18.874293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.664 [2024-11-19 12:27:18.876304] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.664 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71750 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71750 ']' 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71750 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71750 00:07:13.924 killing process with pid 71750 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71750' 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71750 00:07:13.924 [2024-11-19 12:27:18.959732] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.924 12:27:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71750 00:07:13.924 [2024-11-19 12:27:18.960432] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:13.924 [2024-11-19 12:27:18.960505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.924 [2024-11-19 12:27:18.960524] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:13.924 [2024-11-19 12:27:18.966200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.924 [2024-11-19 12:27:18.966570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.924 [2024-11-19 12:27:18.966603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:13.924 [2024-11-19 12:27:19.176943] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.183 ************************************ 00:07:14.183 END TEST raid1_resize_data_offset_test 00:07:14.183 ************************************ 00:07:14.183 12:27:19 bdev_raid.raid1_resize_data_offset_test -- 
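The teardown traced here (`killprocess 71750`) checks the pid still belongs to the expected process before killing it. A simplified stand-in for that `autotest_common.sh` pattern (the helper name mirrors the log, but the real version also special-cases sudo and retries):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess teardown pattern: verify the pid is alive, log
# its command name, kill it, and reap it. `ps --no-headers -o comm=` is the
# GNU procps form seen in the trace.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if already gone
  local name
  name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid" 2>/dev/null                   # reap; ignore the signal status
  return 0
}

sleep 30 &
bgpid=$!
killprocess "$bgpid"
```

After `wait` returns, the pid is fully reaped, which is why the log's subsequent `wait 71750` sees the SPDK app's final RAID cleanup messages before the test returns.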
bdev/bdev_raid.sh@943 -- # return 0 00:07:14.183 00:07:14.183 real 0m1.726s 00:07:14.183 user 0m1.709s 00:07:14.183 sys 0m0.455s 00:07:14.183 12:27:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.183 12:27:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.442 12:27:19 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:14.442 12:27:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:14.442 12:27:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.442 12:27:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.442 ************************************ 00:07:14.442 START TEST raid0_resize_superblock_test 00:07:14.442 ************************************ 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71805 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.442 Process raid pid: 71805 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71805' 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71805 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71805 ']' 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.442 12:27:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.442 [2024-11-19 12:27:19.579245] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:14.442 [2024-11-19 12:27:19.579459] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.702 [2024-11-19 12:27:19.743052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.702 [2024-11-19 12:27:19.791032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.702 [2024-11-19 12:27:19.833611] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.702 [2024-11-19 12:27:19.833653] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.278 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.278 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:15.278 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:15.278 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.278 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:15.278 malloc0 00:07:15.278 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.278 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:15.278 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.278 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.278 [2024-11-19 12:27:20.529140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:15.278 [2024-11-19 12:27:20.529212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.278 [2024-11-19 12:27:20.529236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:15.278 [2024-11-19 12:27:20.529247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.549 [2024-11-19 12:27:20.532119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.549 [2024-11-19 12:27:20.532165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:15.549 pt0 00:07:15.549 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.550 9391c8c7-fe8e-4602-938b-2b0ffbe1ca4d 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.550 7e794d0e-91c4-4976-8251-c171e091edb2 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.550 41980fd7-86a2-4cc1-9349-8813f72d9bd7 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.550 [2024-11-19 12:27:20.665887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7e794d0e-91c4-4976-8251-c171e091edb2 is claimed 00:07:15.550 [2024-11-19 12:27:20.666038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 41980fd7-86a2-4cc1-9349-8813f72d9bd7 is claimed 00:07:15.550 [2024-11-19 12:27:20.666148] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:15.550 [2024-11-19 12:27:20.666160] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:15.550 [2024-11-19 12:27:20.666417] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:15.550 [2024-11-19 12:27:20.666570] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:15.550 [2024-11-19 12:27:20.666580] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:15.550 [2024-11-19 12:27:20.666739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:15.550 12:27:20 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.550 [2024-11-19 12:27:20.781915] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.550 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.809 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.809 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 [2024-11-19 12:27:20.829758] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.810 [2024-11-19 12:27:20.829825] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7e794d0e-91c4-4976-8251-c171e091edb2' was resized: old size 131072, new size 204800 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 [2024-11-19 12:27:20.841660] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.810 [2024-11-19 12:27:20.841685] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '41980fd7-86a2-4cc1-9349-8813f72d9bd7' was resized: old size 131072, new size 204800 00:07:15.810 [2024-11-19 12:27:20.841713] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.810 12:27:20 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:15.810 [2024-11-19 12:27:20.953582] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 [2024-11-19 12:27:20.989373] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:07:15.810 [2024-11-19 12:27:20.989451] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:15.810 [2024-11-19 12:27:20.989465] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.810 [2024-11-19 12:27:20.989482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:15.810 [2024-11-19 12:27:20.989606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.810 [2024-11-19 12:27:20.989641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.810 [2024-11-19 12:27:20.989654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.810 12:27:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 [2024-11-19 12:27:21.001242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:15.810 [2024-11-19 12:27:21.001313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.810 [2024-11-19 12:27:21.001335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:15.810 [2024-11-19 12:27:21.001348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.810 [2024-11-19 12:27:21.003725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.810 [2024-11-19 12:27:21.003789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:15.810 pt0 00:07:15.810 [2024-11-19 12:27:21.005311] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7e794d0e-91c4-4976-8251-c171e091edb2 00:07:15.810 [2024-11-19 12:27:21.005378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7e794d0e-91c4-4976-8251-c171e091edb2 is claimed 00:07:15.810 [2024-11-19 12:27:21.005476] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 41980fd7-86a2-4cc1-9349-8813f72d9bd7 00:07:15.810 [2024-11-19 12:27:21.005495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 41980fd7-86a2-4cc1-9349-8813f72d9bd7 is claimed 00:07:15.810 [2024-11-19 12:27:21.005618] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 41980fd7-86a2-4cc1-9349-8813f72d9bd7 (2) smaller than existing raid bdev Raid (3) 00:07:15.810 [2024-11-19 12:27:21.005639] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7e794d0e-91c4-4976-8251-c171e091edb2: File exists 00:07:15.810 [2024-11-19 12:27:21.005674] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:15.810 [2024-11-19 12:27:21.005682] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:15.810 [2024-11-19 12:27:21.005951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:15.810 [2024-11-19 12:27:21.006079] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:15.810 [2024-11-19 12:27:21.006087] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:15.810 [2024-11-19 12:27:21.006207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:15.810 [2024-11-19 12:27:21.025584] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.810 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71805 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71805 ']' 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71805 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71805 00:07:16.071 killing process with pid 71805 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71805' 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71805 00:07:16.071 [2024-11-19 12:27:21.112778] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.071 [2024-11-19 12:27:21.112873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.071 [2024-11-19 12:27:21.112921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.071 [2024-11-19 12:27:21.112931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:16.071 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71805 00:07:16.071 [2024-11-19 12:27:21.272878] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.331 ************************************ 00:07:16.331 END TEST raid0_resize_superblock_test 00:07:16.331 ************************************ 00:07:16.331 12:27:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:16.331 00:07:16.331 real 0m2.027s 00:07:16.331 user 0m2.316s 00:07:16.331 sys 0m0.505s 00:07:16.331 12:27:21 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.331 12:27:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.331 12:27:21 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:16.331 12:27:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:16.331 12:27:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.331 12:27:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.590 ************************************ 00:07:16.590 START TEST raid1_resize_superblock_test 00:07:16.590 ************************************ 00:07:16.590 12:27:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:16.590 12:27:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:16.590 Process raid pid: 71876 00:07:16.590 12:27:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71876 00:07:16.590 12:27:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.590 12:27:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71876' 00:07:16.590 12:27:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71876 00:07:16.590 12:27:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71876 ']' 00:07:16.591 12:27:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.591 12:27:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.591 12:27:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.591 12:27:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.591 12:27:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.591 [2024-11-19 12:27:21.677641] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:16.591 [2024-11-19 12:27:21.677797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.591 [2024-11-19 12:27:21.840380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.850 [2024-11-19 12:27:21.887372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.850 [2024-11-19 12:27:21.929703] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.850 [2024-11-19 12:27:21.929773] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.420 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.420 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:17.420 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:17.420 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.420 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.420 malloc0 00:07:17.420 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.420 12:27:22 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:17.421 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.421 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.421 [2024-11-19 12:27:22.637044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:17.421 [2024-11-19 12:27:22.637107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.421 [2024-11-19 12:27:22.637132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:17.421 [2024-11-19 12:27:22.637143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.421 [2024-11-19 12:27:22.639337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.421 [2024-11-19 12:27:22.639379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:17.421 pt0 00:07:17.421 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.421 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:17.421 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.421 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.681 8991d4aa-4361-4284-a550-ce74aad08127 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.681 12:27:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.681 46f3d53b-f1ac-4b34-ba31-9d5034c09f45 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.681 31734b90-2b76-4e87-bdeb-35e96458df04 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.681 [2024-11-19 12:27:22.774817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 46f3d53b-f1ac-4b34-ba31-9d5034c09f45 is claimed 00:07:17.681 [2024-11-19 12:27:22.774893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 31734b90-2b76-4e87-bdeb-35e96458df04 is claimed 00:07:17.681 [2024-11-19 12:27:22.775010] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:17.681 [2024-11-19 12:27:22.775026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:17.681 [2024-11-19 12:27:22.775272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:17.681 [2024-11-19 12:27:22.775416] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:17.681 [2024-11-19 12:27:22.775432] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:17.681 [2024-11-19 12:27:22.775557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:17.681 [2024-11-19 12:27:22.882903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.681 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.681 [2024-11-19 12:27:22.934868] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.681 [2024-11-19 12:27:22.934899] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '46f3d53b-f1ac-4b34-ba31-9d5034c09f45' was resized: old size 131072, new size 204800 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:17.942 12:27:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.942 [2024-11-19 12:27:22.946685] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.942 [2024-11-19 12:27:22.946783] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '31734b90-2b76-4e87-bdeb-35e96458df04' was resized: old size 131072, new size 204800 00:07:17.942 [2024-11-19 12:27:22.946820] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:17.942 12:27:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.942 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.942 [2024-11-19 12:27:23.058534] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.943 [2024-11-19 12:27:23.086379] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:17.943 [2024-11-19 12:27:23.086467] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:17.943 [2024-11-19 12:27:23.086504] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:17.943 [2024-11-19 12:27:23.086696] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.943 [2024-11-19 12:27:23.086897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.943 [2024-11-19 12:27:23.086951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.943 [2024-11-19 12:27:23.086964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.943 [2024-11-19 12:27:23.098237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:17.943 [2024-11-19 12:27:23.098328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.943 [2024-11-19 12:27:23.098355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:17.943 [2024-11-19 12:27:23.098370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.943 [2024-11-19 12:27:23.100654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.943 [2024-11-19 12:27:23.100792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:17.943 [2024-11-19 12:27:23.102399] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
46f3d53b-f1ac-4b34-ba31-9d5034c09f45 00:07:17.943 [2024-11-19 12:27:23.102452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 46f3d53b-f1ac-4b34-ba31-9d5034c09f45 is claimed 00:07:17.943 [2024-11-19 12:27:23.102532] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 31734b90-2b76-4e87-bdeb-35e96458df04 00:07:17.943 [2024-11-19 12:27:23.102552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 31734b90-2b76-4e87-bdeb-35e96458df04 is claimed 00:07:17.943 [2024-11-19 12:27:23.102702] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 31734b90-2b76-4e87-bdeb-35e96458df04 (2) smaller than existing raid bdev Raid (3) 00:07:17.943 [2024-11-19 12:27:23.102725] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 46f3d53b-f1ac-4b34-ba31-9d5034c09f45: File exists 00:07:17.943 [2024-11-19 12:27:23.102778] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:17.943 [2024-11-19 12:27:23.102788] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:17.943 [2024-11-19 12:27:23.103032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:17.943 pt0 00:07:17.943 [2024-11-19 12:27:23.103165] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:17.943 [2024-11-19 12:27:23.103180] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:17.943 [2024-11-19 12:27:23.103306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.943 [2024-11-19 12:27:23.126771] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71876 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71876 ']' 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71876 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.943 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71876 00:07:18.203 killing process with pid 71876 00:07:18.203 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.203 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.203 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71876' 00:07:18.203 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71876 00:07:18.203 [2024-11-19 12:27:23.208095] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.203 [2024-11-19 12:27:23.208204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.203 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71876 00:07:18.203 [2024-11-19 12:27:23.208261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.203 [2024-11-19 12:27:23.208271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:18.203 [2024-11-19 12:27:23.368167] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.463 12:27:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:18.463 00:07:18.463 real 0m2.023s 00:07:18.463 user 0m2.303s 00:07:18.463 sys 0m0.505s 00:07:18.463 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.463 12:27:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.463 ************************************ 00:07:18.463 END TEST raid1_resize_superblock_test 00:07:18.463 
************************************ 00:07:18.463 12:27:23 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:18.463 12:27:23 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:18.463 12:27:23 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:18.463 12:27:23 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:18.463 12:27:23 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:18.463 12:27:23 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:18.463 12:27:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.463 12:27:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.463 12:27:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.463 ************************************ 00:07:18.463 START TEST raid_function_test_raid0 00:07:18.463 ************************************ 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:18.463 Process raid pid: 71949 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71949 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71949' 00:07:18.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71949 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 71949 ']' 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.463 12:27:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:18.724 [2024-11-19 12:27:23.790424] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:18.724 [2024-11-19 12:27:23.790551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.724 [2024-11-19 12:27:23.953516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.982 [2024-11-19 12:27:24.000783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.982 [2024-11-19 12:27:24.042839] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.982 [2024-11-19 12:27:24.042880] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:19.552 Base_1 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:19.552 Base_2 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:19.552 [2024-11-19 12:27:24.663567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:19.552 [2024-11-19 12:27:24.665716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:19.552 [2024-11-19 12:27:24.665799] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:19.552 [2024-11-19 12:27:24.665814] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:19.552 [2024-11-19 12:27:24.666125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:19.552 [2024-11-19 12:27:24.666269] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 
00:07:19.552 [2024-11-19 12:27:24.666286] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:19.552 [2024-11-19 12:27:24.666434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:19.552 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:19.812 [2024-11-19 12:27:24.887155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:19.812 /dev/nbd0 00:07:19.812 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.813 1+0 records in 00:07:19.813 1+0 records out 00:07:19.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392763 s, 10.4 MB/s 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:19.813 12:27:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.072 { 00:07:20.072 "nbd_device": "/dev/nbd0", 00:07:20.072 "bdev_name": "raid" 00:07:20.072 } 00:07:20.072 ]' 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.072 { 00:07:20.072 "nbd_device": "/dev/nbd0", 00:07:20.072 "bdev_name": "raid" 00:07:20.072 } 00:07:20.072 ]' 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 
00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:20.072 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:20.073 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:20.073 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 
00:07:20.073 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:20.073 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:20.073 4096+0 records in 00:07:20.073 4096+0 records out 00:07:20.073 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0339735 s, 61.7 MB/s 00:07:20.073 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:20.332 4096+0 records in 00:07:20.332 4096+0 records out 00:07:20.332 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.179393 s, 11.7 MB/s 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:20.332 128+0 records in 00:07:20.332 128+0 records out 00:07:20.332 65536 bytes (66 kB, 64 KiB) copied, 0.00143228 s, 45.8 MB/s 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:20.332 2035+0 records in 00:07:20.332 2035+0 records out 00:07:20.332 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0144727 s, 72.0 MB/s 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:20.332 456+0 records in 00:07:20.332 456+0 records out 00:07:20.332 233472 bytes (233 kB, 228 KiB) copied, 0.00415397 s, 56.2 MB/s 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:20.332 12:27:25 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.332 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.593 [2024-11-19 12:27:25.787165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 
)) 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:20.593 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:20.853 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.853 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.853 12:27:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71949 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@950 -- # '[' -z 71949 ']' 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 71949 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71949 00:07:20.853 killing process with pid 71949 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71949' 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71949 00:07:20.853 [2024-11-19 12:27:26.087502] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.853 [2024-11-19 12:27:26.087627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.853 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71949 00:07:20.853 [2024-11-19 12:27:26.087679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.853 [2024-11-19 12:27:26.087690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:20.853 [2024-11-19 12:27:26.110161] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.114 12:27:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:21.114 00:07:21.114 real 0m2.638s 00:07:21.114 user 0m3.229s 00:07:21.114 sys 0m0.928s 00:07:21.114 ************************************ 
00:07:21.114 END TEST raid_function_test_raid0 00:07:21.114 ************************************ 00:07:21.114 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.114 12:27:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:21.374 12:27:26 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:21.374 12:27:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.374 12:27:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.374 12:27:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.374 ************************************ 00:07:21.374 START TEST raid_function_test_concat 00:07:21.374 ************************************ 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72065 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72065' 00:07:21.374 Process raid pid: 72065 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72065 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 72065 ']' 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.374 12:27:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:21.374 [2024-11-19 12:27:26.500146] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:21.374 [2024-11-19 12:27:26.500359] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.635 [2024-11-19 12:27:26.662074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.635 [2024-11-19 12:27:26.712421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.635 [2024-11-19 12:27:26.754707] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.635 [2024-11-19 12:27:26.754742] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:22.203 Base_1 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:22.203 Base_2 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:22.203 [2024-11-19 12:27:27.364440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:22.203 [2024-11-19 12:27:27.366689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:22.203 [2024-11-19 12:27:27.366777] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:22.203 [2024-11-19 12:27:27.366790] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:22.203 [2024-11-19 12:27:27.367044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:22.203 [2024-11-19 12:27:27.367181] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:22.203 [2024-11-19 12:27:27.367191] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 
00:07:22.203 [2024-11-19 12:27:27.367329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:22.203 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:22.204 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:22.204 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.204 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:22.204 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.204 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:22.204 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.204 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:22.204 12:27:27 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:22.473 [2024-11-19 12:27:27.584064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.473 /dev/nbd0 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.473 1+0 records in 00:07:22.473 1+0 records out 00:07:22.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436479 s, 9.4 MB/s 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 
00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:22.473 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:22.748 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.748 { 00:07:22.748 "nbd_device": "/dev/nbd0", 00:07:22.749 "bdev_name": "raid" 00:07:22.749 } 00:07:22.749 ]' 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.749 { 00:07:22.749 "nbd_device": "/dev/nbd0", 00:07:22.749 "bdev_name": "raid" 00:07:22.749 } 00:07:22.749 ]' 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:22.749 12:27:27 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local 
unmap_len 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:22.749 4096+0 records in 00:07:22.749 4096+0 records out 00:07:22.749 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0271778 s, 77.2 MB/s 00:07:22.749 12:27:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:23.009 4096+0 records in 00:07:23.009 4096+0 records out 00:07:23.009 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.19344 s, 10.8 MB/s 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:23.009 128+0 records in 00:07:23.009 128+0 records out 00:07:23.009 65536 bytes (66 kB, 64 KiB) copied, 0.00116494 s, 56.3 MB/s 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:23.009 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:23.009 2035+0 records in 00:07:23.009 2035+0 records out 00:07:23.009 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0142328 s, 73.2 MB/s 00:07:23.010 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:23.010 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.010 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.010 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.010 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.010 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:23.010 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:23.010 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:23.010 456+0 records in 00:07:23.010 456+0 records out 00:07:23.010 233472 bytes (233 kB, 228 KiB) copied, 0.00358002 s, 65.2 MB/s 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 
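Each unmap iteration in the trace converts a block offset/count pair into byte values before running `dd`, `blkdiscard`, and `cmp`. A self-contained sketch of that conversion, using the block lists shown above (`unmap_blk_offs`, `unmap_blk_nums`, blksize 512); it reproduces the 0/65536, 526336/1041920, and 164352/233472 pairs seen in the log:

```shell
# Derive the unmap_off/unmap_len byte values seen in the trace from
# the block-offset and block-count lists (512-byte blocks).
blksize=512
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)
for i in 0 1 2; do
  unmap_off=$(( unmap_blk_offs[i] * blksize ))
  unmap_len=$(( unmap_blk_nums[i] * blksize ))
  echo "off=$unmap_off len=$unmap_len"
done
```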
00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.270 [2024-11-19 12:27:28.506176] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.270 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:23.530 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.530 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.530 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.530 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72065 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 72065 ']' 00:07:23.790 
12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 72065 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72065 00:07:23.790 killing process with pid 72065 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72065' 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 72065 00:07:23.790 [2024-11-19 12:27:28.846434] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.790 12:27:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 72065 00:07:23.790 [2024-11-19 12:27:28.846560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.790 [2024-11-19 12:27:28.846623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.790 [2024-11-19 12:27:28.846636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:23.790 [2024-11-19 12:27:28.870015] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.049 12:27:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:24.049 00:07:24.049 real 0m2.701s 00:07:24.049 user 0m3.329s 00:07:24.049 sys 0m0.917s 00:07:24.049 12:27:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:24.049 12:27:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:24.049 ************************************ 00:07:24.049 END TEST raid_function_test_concat 00:07:24.049 ************************************ 00:07:24.049 12:27:29 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:24.049 12:27:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:24.049 12:27:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.049 12:27:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.049 ************************************ 00:07:24.049 START TEST raid0_resize_test 00:07:24.049 ************************************ 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72175 00:07:24.049 Process raid pid: 72175 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72175' 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72175 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72175 ']' 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.049 12:27:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.049 [2024-11-19 12:27:29.266345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:24.049 [2024-11-19 12:27:29.266476] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.308 [2024-11-19 12:27:29.428876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.308 [2024-11-19 12:27:29.475369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.308 [2024-11-19 12:27:29.517729] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.308 [2024-11-19 12:27:29.517797] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.877 Base_1 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.877 Base_2 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.877 [2024-11-19 12:27:30.123302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:24.877 [2024-11-19 12:27:30.125116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:24.877 [2024-11-19 12:27:30.125187] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:24.877 [2024-11-19 12:27:30.125198] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:24.877 [2024-11-19 12:27:30.125442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:24.877 [2024-11-19 12:27:30.125557] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:24.877 [2024-11-19 12:27:30.125578] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:24.877 [2024-11-19 12:27:30.125712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.877 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.878 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:24.878 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.878 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.878 [2024-11-19 12:27:30.135253] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:24.878 [2024-11-19 12:27:30.135281] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:25.138 true 
00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.138 [2024-11-19 12:27:30.151413] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.138 [2024-11-19 12:27:30.199151] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:25.138 [2024-11-19 12:27:30.199176] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:25.138 [2024-11-19 12:27:30.199203] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:25.138 true 
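The resize checks above convert `num_blocks` from `bdev_get_bdevs` back to MiB: 131072 blocks of 512 bytes is the expected 64 MiB after one base bdev grows, and 262144 blocks is 128 MiB once both have been resized. The same arithmetic as plain shell:

```shell
# Convert a raid bdev's num_blocks (512-byte blocks) to MiB, matching
# the expected_size values the raid0_resize_test checks.
blksize=512
to_mb() { echo $(( $1 * blksize / 1024 / 1024 )); }
size_one=$(to_mb 131072)   # after Base_1 is resized
size_both=$(to_mb 262144)  # after Base_2 is resized as well
echo "$size_one $size_both"
```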
00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.138 [2024-11-19 12:27:30.215275] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72175 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72175 ']' 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72175 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72175 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.138 12:27:30 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.138 killing process with pid 72175 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72175' 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72175 00:07:25.138 [2024-11-19 12:27:30.294221] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.138 [2024-11-19 12:27:30.294316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.138 [2024-11-19 12:27:30.294373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.138 [2024-11-19 12:27:30.294382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:25.138 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72175 00:07:25.138 [2024-11-19 12:27:30.295919] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.401 12:27:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:25.401 00:07:25.401 real 0m1.352s 00:07:25.401 user 0m1.502s 00:07:25.401 sys 0m0.324s 00:07:25.401 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.401 12:27:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.401 ************************************ 00:07:25.401 END TEST raid0_resize_test 00:07:25.401 ************************************ 00:07:25.401 12:27:30 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:25.401 12:27:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.401 12:27:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.401 12:27:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.401 
************************************ 00:07:25.401 START TEST raid1_resize_test 00:07:25.401 ************************************ 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72226 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72226' 00:07:25.401 Process raid pid: 72226 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72226 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72226 ']' 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.401 12:27:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.661 [2024-11-19 12:27:30.687586] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:25.661 [2024-11-19 12:27:30.687713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.661 [2024-11-19 12:27:30.845926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.661 [2024-11-19 12:27:30.893130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.921 [2024-11-19 12:27:30.935509] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.921 [2024-11-19 12:27:30.935548] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.492 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.492 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:26.492 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:26.492 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.492 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.492 Base_1 00:07:26.492 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.492 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 
00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.493 Base_2 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.493 [2024-11-19 12:27:31.548852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:26.493 [2024-11-19 12:27:31.550714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:26.493 [2024-11-19 12:27:31.550794] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:26.493 [2024-11-19 12:27:31.550808] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:26.493 [2024-11-19 12:27:31.551069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:26.493 [2024-11-19 12:27:31.551197] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:26.493 [2024-11-19 12:27:31.551214] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:26.493 [2024-11-19 12:27:31.551341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:26.493 
12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.493 [2024-11-19 12:27:31.560804] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:26.493 [2024-11-19 12:27:31.560825] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:26.493 true 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.493 [2024-11-19 12:27:31.576947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:07:26.493 [2024-11-19 12:27:31.620693] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:26.493 [2024-11-19 12:27:31.620720] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:26.493 [2024-11-19 12:27:31.620755] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:26.493 true 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:26.493 [2024-11-19 12:27:31.632841] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72226 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72226 ']' 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72226 
00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72226 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.493 killing process with pid 72226 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72226' 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72226 00:07:26.493 [2024-11-19 12:27:31.717059] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.493 [2024-11-19 12:27:31.717204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.493 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72226 00:07:26.493 [2024-11-19 12:27:31.717640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.493 [2024-11-19 12:27:31.717661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:26.493 [2024-11-19 12:27:31.718846] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.753 12:27:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:26.753 00:07:26.753 real 0m1.359s 00:07:26.753 user 0m1.514s 00:07:26.753 sys 0m0.315s 00:07:26.753 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.753 12:27:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.753 ************************************ 00:07:26.753 END TEST 
raid1_resize_test 00:07:26.753 ************************************ 00:07:27.013 12:27:32 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:27.013 12:27:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:27.013 12:27:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:27.013 12:27:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:27.013 12:27:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.013 12:27:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.013 ************************************ 00:07:27.013 START TEST raid_state_function_test 00:07:27.013 ************************************ 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:27.013 12:27:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72272 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72272' 00:07:27.013 Process raid pid: 72272 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72272 00:07:27.013 12:27:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72272 ']' 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.013 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.013 [2024-11-19 12:27:32.127853] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:27.013 [2024-11-19 12:27:32.127981] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.013 [2024-11-19 12:27:32.270957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.273 [2024-11-19 12:27:32.320114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.273 [2024-11-19 12:27:32.362399] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.273 [2024-11-19 12:27:32.362440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.842 [2024-11-19 12:27:32.960007] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.842 [2024-11-19 12:27:32.960078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.842 [2024-11-19 12:27:32.960090] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.842 [2024-11-19 12:27:32.960100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.842 
12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.842 12:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.842 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.842 "name": "Existed_Raid", 00:07:27.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.842 "strip_size_kb": 64, 00:07:27.842 "state": "configuring", 00:07:27.842 "raid_level": "raid0", 00:07:27.842 "superblock": false, 00:07:27.842 "num_base_bdevs": 2, 00:07:27.842 "num_base_bdevs_discovered": 0, 00:07:27.842 "num_base_bdevs_operational": 2, 00:07:27.842 "base_bdevs_list": [ 00:07:27.842 { 00:07:27.842 "name": "BaseBdev1", 00:07:27.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.842 "is_configured": false, 00:07:27.842 "data_offset": 0, 00:07:27.842 "data_size": 0 00:07:27.842 }, 00:07:27.842 { 00:07:27.842 "name": "BaseBdev2", 00:07:27.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.842 "is_configured": false, 00:07:27.842 "data_offset": 0, 00:07:27.842 "data_size": 0 00:07:27.842 } 00:07:27.842 ] 00:07:27.842 }' 00:07:27.842 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.842 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.412 12:27:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.412 [2024-11-19 12:27:33.395236] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.412 [2024-11-19 12:27:33.395290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.412 [2024-11-19 12:27:33.407231] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.412 [2024-11-19 12:27:33.407274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.412 [2024-11-19 12:27:33.407283] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.412 [2024-11-19 12:27:33.407292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.412 [2024-11-19 12:27:33.428110] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.412 BaseBdev1 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.412 [ 00:07:28.412 { 00:07:28.412 "name": "BaseBdev1", 00:07:28.412 "aliases": [ 00:07:28.412 "2fe2bd70-c32f-4f35-b091-7256909750d4" 00:07:28.412 ], 00:07:28.412 "product_name": "Malloc disk", 00:07:28.412 "block_size": 512, 00:07:28.412 "num_blocks": 65536, 00:07:28.412 "uuid": 
"2fe2bd70-c32f-4f35-b091-7256909750d4", 00:07:28.412 "assigned_rate_limits": { 00:07:28.412 "rw_ios_per_sec": 0, 00:07:28.412 "rw_mbytes_per_sec": 0, 00:07:28.412 "r_mbytes_per_sec": 0, 00:07:28.412 "w_mbytes_per_sec": 0 00:07:28.412 }, 00:07:28.412 "claimed": true, 00:07:28.412 "claim_type": "exclusive_write", 00:07:28.412 "zoned": false, 00:07:28.412 "supported_io_types": { 00:07:28.412 "read": true, 00:07:28.412 "write": true, 00:07:28.412 "unmap": true, 00:07:28.412 "flush": true, 00:07:28.412 "reset": true, 00:07:28.412 "nvme_admin": false, 00:07:28.412 "nvme_io": false, 00:07:28.412 "nvme_io_md": false, 00:07:28.412 "write_zeroes": true, 00:07:28.412 "zcopy": true, 00:07:28.412 "get_zone_info": false, 00:07:28.412 "zone_management": false, 00:07:28.412 "zone_append": false, 00:07:28.412 "compare": false, 00:07:28.412 "compare_and_write": false, 00:07:28.412 "abort": true, 00:07:28.412 "seek_hole": false, 00:07:28.412 "seek_data": false, 00:07:28.412 "copy": true, 00:07:28.412 "nvme_iov_md": false 00:07:28.412 }, 00:07:28.412 "memory_domains": [ 00:07:28.412 { 00:07:28.412 "dma_device_id": "system", 00:07:28.412 "dma_device_type": 1 00:07:28.412 }, 00:07:28.412 { 00:07:28.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.412 "dma_device_type": 2 00:07:28.412 } 00:07:28.412 ], 00:07:28.412 "driver_specific": {} 00:07:28.412 } 00:07:28.412 ] 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.412 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.413 12:27:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.413 "name": "Existed_Raid", 00:07:28.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.413 "strip_size_kb": 64, 00:07:28.413 "state": "configuring", 00:07:28.413 "raid_level": "raid0", 00:07:28.413 "superblock": false, 00:07:28.413 "num_base_bdevs": 2, 00:07:28.413 "num_base_bdevs_discovered": 1, 00:07:28.413 "num_base_bdevs_operational": 2, 00:07:28.413 "base_bdevs_list": [ 00:07:28.413 { 00:07:28.413 "name": "BaseBdev1", 00:07:28.413 "uuid": "2fe2bd70-c32f-4f35-b091-7256909750d4", 00:07:28.413 "is_configured": true, 00:07:28.413 "data_offset": 0, 
00:07:28.413 "data_size": 65536 00:07:28.413 }, 00:07:28.413 { 00:07:28.413 "name": "BaseBdev2", 00:07:28.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.413 "is_configured": false, 00:07:28.413 "data_offset": 0, 00:07:28.413 "data_size": 0 00:07:28.413 } 00:07:28.413 ] 00:07:28.413 }' 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.413 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.982 [2024-11-19 12:27:33.935374] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.982 [2024-11-19 12:27:33.935440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.982 [2024-11-19 12:27:33.947393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.982 [2024-11-19 12:27:33.949290] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.982 [2024-11-19 12:27:33.949355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.982 12:27:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.982 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.982 "name": "Existed_Raid", 00:07:28.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.982 "strip_size_kb": 64, 00:07:28.982 "state": "configuring", 00:07:28.982 "raid_level": "raid0", 00:07:28.982 "superblock": false, 00:07:28.982 "num_base_bdevs": 2, 00:07:28.982 "num_base_bdevs_discovered": 1, 00:07:28.982 "num_base_bdevs_operational": 2, 00:07:28.982 "base_bdevs_list": [ 00:07:28.982 { 00:07:28.982 "name": "BaseBdev1", 00:07:28.982 "uuid": "2fe2bd70-c32f-4f35-b091-7256909750d4", 00:07:28.982 "is_configured": true, 00:07:28.982 "data_offset": 0, 00:07:28.982 "data_size": 65536 00:07:28.982 }, 00:07:28.982 { 00:07:28.982 "name": "BaseBdev2", 00:07:28.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.982 "is_configured": false, 00:07:28.982 "data_offset": 0, 00:07:28.982 "data_size": 0 00:07:28.982 } 00:07:28.982 ] 00:07:28.982 }' 00:07:28.982 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.983 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.243 [2024-11-19 12:27:34.350418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.243 [2024-11-19 12:27:34.350544] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:29.243 [2024-11-19 12:27:34.350578] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:29.243 [2024-11-19 12:27:34.351568] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:29.243 [2024-11-19 12:27:34.352118] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:29.243 [2024-11-19 12:27:34.352234] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:29.243 [2024-11-19 12:27:34.352915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.243 BaseBdev2 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.243 12:27:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.243 [ 00:07:29.243 { 00:07:29.243 "name": "BaseBdev2", 00:07:29.243 "aliases": [ 00:07:29.243 "5424eee7-69a1-4b24-998a-cbd9d5a44ce9" 00:07:29.243 ], 00:07:29.243 "product_name": "Malloc disk", 00:07:29.243 "block_size": 512, 00:07:29.243 "num_blocks": 65536, 00:07:29.243 "uuid": "5424eee7-69a1-4b24-998a-cbd9d5a44ce9", 00:07:29.243 "assigned_rate_limits": { 00:07:29.243 "rw_ios_per_sec": 0, 00:07:29.243 "rw_mbytes_per_sec": 0, 00:07:29.243 "r_mbytes_per_sec": 0, 00:07:29.243 "w_mbytes_per_sec": 0 00:07:29.243 }, 00:07:29.243 "claimed": true, 00:07:29.243 "claim_type": "exclusive_write", 00:07:29.243 "zoned": false, 00:07:29.243 "supported_io_types": { 00:07:29.243 "read": true, 00:07:29.243 "write": true, 00:07:29.243 "unmap": true, 00:07:29.243 "flush": true, 00:07:29.243 "reset": true, 00:07:29.243 "nvme_admin": false, 00:07:29.243 "nvme_io": false, 00:07:29.243 "nvme_io_md": false, 00:07:29.243 "write_zeroes": true, 00:07:29.243 "zcopy": true, 00:07:29.243 "get_zone_info": false, 00:07:29.243 "zone_management": false, 00:07:29.243 "zone_append": false, 00:07:29.243 "compare": false, 00:07:29.243 "compare_and_write": false, 00:07:29.243 "abort": true, 00:07:29.243 "seek_hole": false, 00:07:29.243 "seek_data": false, 00:07:29.243 "copy": true, 00:07:29.243 "nvme_iov_md": false 00:07:29.243 }, 00:07:29.243 "memory_domains": [ 00:07:29.243 { 00:07:29.243 "dma_device_id": "system", 00:07:29.243 "dma_device_type": 1 00:07:29.243 }, 00:07:29.243 { 00:07:29.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.243 "dma_device_type": 2 00:07:29.243 } 00:07:29.243 ], 00:07:29.243 "driver_specific": {} 00:07:29.243 } 00:07:29.243 ] 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:29.243 12:27:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.243 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:29.244 "name": "Existed_Raid", 00:07:29.244 "uuid": "dd72f85f-9b85-43de-a207-017dc7ce2743", 00:07:29.244 "strip_size_kb": 64, 00:07:29.244 "state": "online", 00:07:29.244 "raid_level": "raid0", 00:07:29.244 "superblock": false, 00:07:29.244 "num_base_bdevs": 2, 00:07:29.244 "num_base_bdevs_discovered": 2, 00:07:29.244 "num_base_bdevs_operational": 2, 00:07:29.244 "base_bdevs_list": [ 00:07:29.244 { 00:07:29.244 "name": "BaseBdev1", 00:07:29.244 "uuid": "2fe2bd70-c32f-4f35-b091-7256909750d4", 00:07:29.244 "is_configured": true, 00:07:29.244 "data_offset": 0, 00:07:29.244 "data_size": 65536 00:07:29.244 }, 00:07:29.244 { 00:07:29.244 "name": "BaseBdev2", 00:07:29.244 "uuid": "5424eee7-69a1-4b24-998a-cbd9d5a44ce9", 00:07:29.244 "is_configured": true, 00:07:29.244 "data_offset": 0, 00:07:29.244 "data_size": 65536 00:07:29.244 } 00:07:29.244 ] 00:07:29.244 }' 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.244 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.814 [2024-11-19 12:27:34.837892] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.814 "name": "Existed_Raid", 00:07:29.814 "aliases": [ 00:07:29.814 "dd72f85f-9b85-43de-a207-017dc7ce2743" 00:07:29.814 ], 00:07:29.814 "product_name": "Raid Volume", 00:07:29.814 "block_size": 512, 00:07:29.814 "num_blocks": 131072, 00:07:29.814 "uuid": "dd72f85f-9b85-43de-a207-017dc7ce2743", 00:07:29.814 "assigned_rate_limits": { 00:07:29.814 "rw_ios_per_sec": 0, 00:07:29.814 "rw_mbytes_per_sec": 0, 00:07:29.814 "r_mbytes_per_sec": 0, 00:07:29.814 "w_mbytes_per_sec": 0 00:07:29.814 }, 00:07:29.814 "claimed": false, 00:07:29.814 "zoned": false, 00:07:29.814 "supported_io_types": { 00:07:29.814 "read": true, 00:07:29.814 "write": true, 00:07:29.814 "unmap": true, 00:07:29.814 "flush": true, 00:07:29.814 "reset": true, 00:07:29.814 "nvme_admin": false, 00:07:29.814 "nvme_io": false, 00:07:29.814 "nvme_io_md": false, 00:07:29.814 "write_zeroes": true, 00:07:29.814 "zcopy": false, 00:07:29.814 "get_zone_info": false, 00:07:29.814 "zone_management": false, 00:07:29.814 "zone_append": false, 00:07:29.814 "compare": false, 00:07:29.814 "compare_and_write": false, 00:07:29.814 "abort": false, 00:07:29.814 "seek_hole": false, 00:07:29.814 "seek_data": false, 00:07:29.814 "copy": false, 00:07:29.814 "nvme_iov_md": false 00:07:29.814 }, 00:07:29.814 "memory_domains": [ 00:07:29.814 { 00:07:29.814 "dma_device_id": "system", 00:07:29.814 "dma_device_type": 1 00:07:29.814 }, 00:07:29.814 { 00:07:29.814 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:29.814 "dma_device_type": 2 00:07:29.814 }, 00:07:29.814 { 00:07:29.814 "dma_device_id": "system", 00:07:29.814 "dma_device_type": 1 00:07:29.814 }, 00:07:29.814 { 00:07:29.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.814 "dma_device_type": 2 00:07:29.814 } 00:07:29.814 ], 00:07:29.814 "driver_specific": { 00:07:29.814 "raid": { 00:07:29.814 "uuid": "dd72f85f-9b85-43de-a207-017dc7ce2743", 00:07:29.814 "strip_size_kb": 64, 00:07:29.814 "state": "online", 00:07:29.814 "raid_level": "raid0", 00:07:29.814 "superblock": false, 00:07:29.814 "num_base_bdevs": 2, 00:07:29.814 "num_base_bdevs_discovered": 2, 00:07:29.814 "num_base_bdevs_operational": 2, 00:07:29.814 "base_bdevs_list": [ 00:07:29.814 { 00:07:29.814 "name": "BaseBdev1", 00:07:29.814 "uuid": "2fe2bd70-c32f-4f35-b091-7256909750d4", 00:07:29.814 "is_configured": true, 00:07:29.814 "data_offset": 0, 00:07:29.814 "data_size": 65536 00:07:29.814 }, 00:07:29.814 { 00:07:29.814 "name": "BaseBdev2", 00:07:29.814 "uuid": "5424eee7-69a1-4b24-998a-cbd9d5a44ce9", 00:07:29.814 "is_configured": true, 00:07:29.814 "data_offset": 0, 00:07:29.814 "data_size": 65536 00:07:29.814 } 00:07:29.814 ] 00:07:29.814 } 00:07:29.814 } 00:07:29.814 }' 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:29.814 BaseBdev2' 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:29.814 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.815 12:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:29.815 [2024-11-19 12:27:35.017271] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:29.815 [2024-11-19 12:27:35.017308] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.815 [2024-11-19 12:27:35.017366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.815 12:27:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.815 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.074 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.074 "name": "Existed_Raid", 00:07:30.074 "uuid": "dd72f85f-9b85-43de-a207-017dc7ce2743", 00:07:30.074 "strip_size_kb": 64, 00:07:30.074 "state": "offline", 00:07:30.074 "raid_level": "raid0", 00:07:30.074 "superblock": false, 00:07:30.074 "num_base_bdevs": 2, 00:07:30.074 "num_base_bdevs_discovered": 1, 00:07:30.074 "num_base_bdevs_operational": 1, 00:07:30.074 "base_bdevs_list": [ 00:07:30.074 { 00:07:30.074 "name": null, 00:07:30.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.074 "is_configured": false, 00:07:30.074 "data_offset": 0, 00:07:30.074 "data_size": 65536 00:07:30.074 }, 00:07:30.074 { 00:07:30.074 "name": "BaseBdev2", 00:07:30.074 "uuid": "5424eee7-69a1-4b24-998a-cbd9d5a44ce9", 00:07:30.074 "is_configured": true, 00:07:30.074 "data_offset": 0, 00:07:30.074 "data_size": 65536 00:07:30.074 } 00:07:30.074 ] 00:07:30.074 }' 00:07:30.074 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.074 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.334 [2024-11-19 12:27:35.523732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.334 [2024-11-19 12:27:35.523815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.334 12:27:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:30.334 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:30.335 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72272 00:07:30.335 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72272 ']' 00:07:30.335 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72272 00:07:30.335 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:30.335 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.595 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72272 00:07:30.595 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.595 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.595 killing process with pid 72272 00:07:30.595 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72272' 00:07:30.595 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72272 00:07:30.595 [2024-11-19 12:27:35.628187] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:30.595 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72272 00:07:30.595 [2024-11-19 12:27:35.629237] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:30.886 00:07:30.886 real 0m3.842s 00:07:30.886 user 0m6.000s 00:07:30.886 sys 0m0.785s 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.886 ************************************ 00:07:30.886 END TEST raid_state_function_test 00:07:30.886 ************************************ 00:07:30.886 12:27:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:30.886 12:27:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:30.886 12:27:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.886 12:27:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.886 ************************************ 00:07:30.886 START TEST raid_state_function_test_sb 00:07:30.886 ************************************ 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72514 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:30.886 Process raid pid: 72514 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72514' 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72514 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72514 ']' 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.886 12:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.886 [2024-11-19 12:27:36.037670] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:30.886 [2024-11-19 12:27:36.037810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.150 [2024-11-19 12:27:36.198930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.150 [2024-11-19 12:27:36.253235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.150 [2024-11-19 12:27:36.296148] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.150 [2024-11-19 12:27:36.296211] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.719 [2024-11-19 12:27:36.873979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.719 [2024-11-19 12:27:36.874062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.719 [2024-11-19 12:27:36.874074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.719 [2024-11-19 12:27:36.874083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.719 
12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.719 "name": "Existed_Raid", 00:07:31.719 "uuid": "50f77335-46b8-4c52-820a-68e93fd00400", 00:07:31.719 "strip_size_kb": 
64, 00:07:31.719 "state": "configuring", 00:07:31.719 "raid_level": "raid0", 00:07:31.719 "superblock": true, 00:07:31.719 "num_base_bdevs": 2, 00:07:31.719 "num_base_bdevs_discovered": 0, 00:07:31.719 "num_base_bdevs_operational": 2, 00:07:31.719 "base_bdevs_list": [ 00:07:31.719 { 00:07:31.719 "name": "BaseBdev1", 00:07:31.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.719 "is_configured": false, 00:07:31.719 "data_offset": 0, 00:07:31.719 "data_size": 0 00:07:31.719 }, 00:07:31.719 { 00:07:31.719 "name": "BaseBdev2", 00:07:31.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.719 "is_configured": false, 00:07:31.719 "data_offset": 0, 00:07:31.719 "data_size": 0 00:07:31.719 } 00:07:31.719 ] 00:07:31.719 }' 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.719 12:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.290 [2024-11-19 12:27:37.313160] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.290 [2024-11-19 12:27:37.313224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.290 12:27:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.290 [2024-11-19 12:27:37.325186] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.290 [2024-11-19 12:27:37.325243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.290 [2024-11-19 12:27:37.325252] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.290 [2024-11-19 12:27:37.325261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.290 [2024-11-19 12:27:37.346206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.290 BaseBdev1 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.290 [ 00:07:32.290 { 00:07:32.290 "name": "BaseBdev1", 00:07:32.290 "aliases": [ 00:07:32.290 "0cfd6149-3839-4dca-9244-2363f5d37380" 00:07:32.290 ], 00:07:32.290 "product_name": "Malloc disk", 00:07:32.290 "block_size": 512, 00:07:32.290 "num_blocks": 65536, 00:07:32.290 "uuid": "0cfd6149-3839-4dca-9244-2363f5d37380", 00:07:32.290 "assigned_rate_limits": { 00:07:32.290 "rw_ios_per_sec": 0, 00:07:32.290 "rw_mbytes_per_sec": 0, 00:07:32.290 "r_mbytes_per_sec": 0, 00:07:32.290 "w_mbytes_per_sec": 0 00:07:32.290 }, 00:07:32.290 "claimed": true, 00:07:32.290 "claim_type": "exclusive_write", 00:07:32.290 "zoned": false, 00:07:32.290 "supported_io_types": { 00:07:32.290 "read": true, 00:07:32.290 "write": true, 00:07:32.290 "unmap": true, 00:07:32.290 "flush": true, 00:07:32.290 "reset": true, 00:07:32.290 "nvme_admin": false, 00:07:32.290 "nvme_io": false, 00:07:32.290 "nvme_io_md": false, 00:07:32.290 "write_zeroes": true, 00:07:32.290 "zcopy": true, 00:07:32.290 "get_zone_info": false, 00:07:32.290 "zone_management": false, 00:07:32.290 "zone_append": false, 00:07:32.290 "compare": false, 00:07:32.290 "compare_and_write": false, 00:07:32.290 
"abort": true, 00:07:32.290 "seek_hole": false, 00:07:32.290 "seek_data": false, 00:07:32.290 "copy": true, 00:07:32.290 "nvme_iov_md": false 00:07:32.290 }, 00:07:32.290 "memory_domains": [ 00:07:32.290 { 00:07:32.290 "dma_device_id": "system", 00:07:32.290 "dma_device_type": 1 00:07:32.290 }, 00:07:32.290 { 00:07:32.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.290 "dma_device_type": 2 00:07:32.290 } 00:07:32.290 ], 00:07:32.290 "driver_specific": {} 00:07:32.290 } 00:07:32.290 ] 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.290 "name": "Existed_Raid", 00:07:32.290 "uuid": "7908dda3-662e-42b8-b196-19c3c9ac818e", 00:07:32.290 "strip_size_kb": 64, 00:07:32.290 "state": "configuring", 00:07:32.290 "raid_level": "raid0", 00:07:32.290 "superblock": true, 00:07:32.290 "num_base_bdevs": 2, 00:07:32.290 "num_base_bdevs_discovered": 1, 00:07:32.290 "num_base_bdevs_operational": 2, 00:07:32.290 "base_bdevs_list": [ 00:07:32.290 { 00:07:32.290 "name": "BaseBdev1", 00:07:32.290 "uuid": "0cfd6149-3839-4dca-9244-2363f5d37380", 00:07:32.290 "is_configured": true, 00:07:32.290 "data_offset": 2048, 00:07:32.290 "data_size": 63488 00:07:32.290 }, 00:07:32.290 { 00:07:32.290 "name": "BaseBdev2", 00:07:32.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.290 "is_configured": false, 00:07:32.290 "data_offset": 0, 00:07:32.290 "data_size": 0 00:07:32.290 } 00:07:32.290 ] 00:07:32.290 }' 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.290 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.860 [2024-11-19 12:27:37.873413] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.860 [2024-11-19 12:27:37.873544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.860 [2024-11-19 12:27:37.885415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.860 [2024-11-19 12:27:37.887371] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.860 [2024-11-19 12:27:37.887447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.860 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.861 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.861 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.861 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.861 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.861 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.861 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.861 "name": "Existed_Raid", 00:07:32.861 "uuid": "e1aa09ff-434f-43f0-8fa4-0d30a5db4108", 00:07:32.861 "strip_size_kb": 64, 00:07:32.861 "state": "configuring", 00:07:32.861 "raid_level": "raid0", 00:07:32.861 "superblock": true, 00:07:32.861 "num_base_bdevs": 2, 00:07:32.861 "num_base_bdevs_discovered": 1, 00:07:32.861 "num_base_bdevs_operational": 2, 00:07:32.861 "base_bdevs_list": [ 00:07:32.861 { 00:07:32.861 "name": "BaseBdev1", 00:07:32.861 "uuid": "0cfd6149-3839-4dca-9244-2363f5d37380", 00:07:32.861 "is_configured": true, 00:07:32.861 "data_offset": 2048, 
00:07:32.861 "data_size": 63488 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "name": "BaseBdev2", 00:07:32.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.861 "is_configured": false, 00:07:32.861 "data_offset": 0, 00:07:32.861 "data_size": 0 00:07:32.861 } 00:07:32.861 ] 00:07:32.861 }' 00:07:32.861 12:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.861 12:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.121 [2024-11-19 12:27:38.325291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.121 [2024-11-19 12:27:38.325521] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:33.121 [2024-11-19 12:27:38.325537] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.121 [2024-11-19 12:27:38.325835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:33.121 [2024-11-19 12:27:38.325981] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:33.121 [2024-11-19 12:27:38.325996] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:33.121 BaseBdev2 00:07:33.121 [2024-11-19 12:27:38.326121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.121 [ 00:07:33.121 { 00:07:33.121 "name": "BaseBdev2", 00:07:33.121 "aliases": [ 00:07:33.121 "758b3b6e-39ea-47e9-bf1a-81d144701e5a" 00:07:33.121 ], 00:07:33.121 "product_name": "Malloc disk", 00:07:33.121 "block_size": 512, 00:07:33.121 "num_blocks": 65536, 00:07:33.121 "uuid": "758b3b6e-39ea-47e9-bf1a-81d144701e5a", 00:07:33.121 "assigned_rate_limits": { 00:07:33.121 "rw_ios_per_sec": 0, 00:07:33.121 "rw_mbytes_per_sec": 0, 00:07:33.121 "r_mbytes_per_sec": 0, 00:07:33.121 "w_mbytes_per_sec": 0 00:07:33.121 }, 00:07:33.121 "claimed": true, 00:07:33.121 "claim_type": 
"exclusive_write", 00:07:33.121 "zoned": false, 00:07:33.121 "supported_io_types": { 00:07:33.121 "read": true, 00:07:33.121 "write": true, 00:07:33.121 "unmap": true, 00:07:33.121 "flush": true, 00:07:33.121 "reset": true, 00:07:33.121 "nvme_admin": false, 00:07:33.121 "nvme_io": false, 00:07:33.121 "nvme_io_md": false, 00:07:33.121 "write_zeroes": true, 00:07:33.121 "zcopy": true, 00:07:33.121 "get_zone_info": false, 00:07:33.121 "zone_management": false, 00:07:33.121 "zone_append": false, 00:07:33.121 "compare": false, 00:07:33.121 "compare_and_write": false, 00:07:33.121 "abort": true, 00:07:33.121 "seek_hole": false, 00:07:33.121 "seek_data": false, 00:07:33.121 "copy": true, 00:07:33.121 "nvme_iov_md": false 00:07:33.121 }, 00:07:33.121 "memory_domains": [ 00:07:33.121 { 00:07:33.121 "dma_device_id": "system", 00:07:33.121 "dma_device_type": 1 00:07:33.121 }, 00:07:33.121 { 00:07:33.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.121 "dma_device_type": 2 00:07:33.121 } 00:07:33.121 ], 00:07:33.121 "driver_specific": {} 00:07:33.121 } 00:07:33.121 ] 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.121 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.122 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.122 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.122 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.122 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.382 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.382 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.382 "name": "Existed_Raid", 00:07:33.382 "uuid": "e1aa09ff-434f-43f0-8fa4-0d30a5db4108", 00:07:33.382 "strip_size_kb": 64, 00:07:33.382 "state": "online", 00:07:33.382 "raid_level": "raid0", 00:07:33.382 "superblock": true, 00:07:33.382 "num_base_bdevs": 2, 00:07:33.382 "num_base_bdevs_discovered": 2, 00:07:33.382 "num_base_bdevs_operational": 2, 00:07:33.382 "base_bdevs_list": [ 00:07:33.382 { 00:07:33.382 "name": "BaseBdev1", 00:07:33.382 "uuid": "0cfd6149-3839-4dca-9244-2363f5d37380", 00:07:33.382 "is_configured": true, 00:07:33.382 "data_offset": 2048, 00:07:33.382 "data_size": 63488 
00:07:33.382 }, 00:07:33.382 { 00:07:33.382 "name": "BaseBdev2", 00:07:33.382 "uuid": "758b3b6e-39ea-47e9-bf1a-81d144701e5a", 00:07:33.382 "is_configured": true, 00:07:33.382 "data_offset": 2048, 00:07:33.382 "data_size": 63488 00:07:33.382 } 00:07:33.382 ] 00:07:33.382 }' 00:07:33.382 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.382 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.642 [2024-11-19 12:27:38.780993] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.642 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.642 "name": 
"Existed_Raid", 00:07:33.642 "aliases": [ 00:07:33.642 "e1aa09ff-434f-43f0-8fa4-0d30a5db4108" 00:07:33.642 ], 00:07:33.642 "product_name": "Raid Volume", 00:07:33.642 "block_size": 512, 00:07:33.642 "num_blocks": 126976, 00:07:33.642 "uuid": "e1aa09ff-434f-43f0-8fa4-0d30a5db4108", 00:07:33.642 "assigned_rate_limits": { 00:07:33.642 "rw_ios_per_sec": 0, 00:07:33.642 "rw_mbytes_per_sec": 0, 00:07:33.642 "r_mbytes_per_sec": 0, 00:07:33.642 "w_mbytes_per_sec": 0 00:07:33.642 }, 00:07:33.642 "claimed": false, 00:07:33.642 "zoned": false, 00:07:33.642 "supported_io_types": { 00:07:33.642 "read": true, 00:07:33.642 "write": true, 00:07:33.642 "unmap": true, 00:07:33.642 "flush": true, 00:07:33.642 "reset": true, 00:07:33.642 "nvme_admin": false, 00:07:33.642 "nvme_io": false, 00:07:33.642 "nvme_io_md": false, 00:07:33.642 "write_zeroes": true, 00:07:33.642 "zcopy": false, 00:07:33.642 "get_zone_info": false, 00:07:33.642 "zone_management": false, 00:07:33.642 "zone_append": false, 00:07:33.642 "compare": false, 00:07:33.642 "compare_and_write": false, 00:07:33.642 "abort": false, 00:07:33.642 "seek_hole": false, 00:07:33.642 "seek_data": false, 00:07:33.642 "copy": false, 00:07:33.642 "nvme_iov_md": false 00:07:33.642 }, 00:07:33.642 "memory_domains": [ 00:07:33.642 { 00:07:33.642 "dma_device_id": "system", 00:07:33.642 "dma_device_type": 1 00:07:33.642 }, 00:07:33.642 { 00:07:33.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.643 "dma_device_type": 2 00:07:33.643 }, 00:07:33.643 { 00:07:33.643 "dma_device_id": "system", 00:07:33.643 "dma_device_type": 1 00:07:33.643 }, 00:07:33.643 { 00:07:33.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.643 "dma_device_type": 2 00:07:33.643 } 00:07:33.643 ], 00:07:33.643 "driver_specific": { 00:07:33.643 "raid": { 00:07:33.643 "uuid": "e1aa09ff-434f-43f0-8fa4-0d30a5db4108", 00:07:33.643 "strip_size_kb": 64, 00:07:33.643 "state": "online", 00:07:33.643 "raid_level": "raid0", 00:07:33.643 "superblock": true, 00:07:33.643 
"num_base_bdevs": 2, 00:07:33.643 "num_base_bdevs_discovered": 2, 00:07:33.643 "num_base_bdevs_operational": 2, 00:07:33.643 "base_bdevs_list": [ 00:07:33.643 { 00:07:33.643 "name": "BaseBdev1", 00:07:33.643 "uuid": "0cfd6149-3839-4dca-9244-2363f5d37380", 00:07:33.643 "is_configured": true, 00:07:33.643 "data_offset": 2048, 00:07:33.643 "data_size": 63488 00:07:33.643 }, 00:07:33.643 { 00:07:33.643 "name": "BaseBdev2", 00:07:33.643 "uuid": "758b3b6e-39ea-47e9-bf1a-81d144701e5a", 00:07:33.643 "is_configured": true, 00:07:33.643 "data_offset": 2048, 00:07:33.643 "data_size": 63488 00:07:33.643 } 00:07:33.643 ] 00:07:33.643 } 00:07:33.643 } 00:07:33.643 }' 00:07:33.643 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.643 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:33.643 BaseBdev2' 00:07:33.643 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.643 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.643 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.643 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.643 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:33.643 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.643 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.903 [2024-11-19 12:27:38.964311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:33.903 [2024-11-19 12:27:38.964402] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.903 [2024-11-19 12:27:38.964473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.903 12:27:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.903 12:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.904 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.904 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.904 "name": "Existed_Raid", 00:07:33.904 "uuid": "e1aa09ff-434f-43f0-8fa4-0d30a5db4108", 00:07:33.904 "strip_size_kb": 64, 00:07:33.904 "state": "offline", 00:07:33.904 "raid_level": "raid0", 00:07:33.904 "superblock": true, 00:07:33.904 "num_base_bdevs": 2, 00:07:33.904 "num_base_bdevs_discovered": 1, 00:07:33.904 "num_base_bdevs_operational": 1, 00:07:33.904 "base_bdevs_list": [ 00:07:33.904 { 00:07:33.904 "name": null, 00:07:33.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.904 "is_configured": false, 00:07:33.904 "data_offset": 0, 00:07:33.904 "data_size": 63488 00:07:33.904 }, 00:07:33.904 { 00:07:33.904 "name": "BaseBdev2", 00:07:33.904 "uuid": "758b3b6e-39ea-47e9-bf1a-81d144701e5a", 00:07:33.904 "is_configured": true, 00:07:33.904 "data_offset": 2048, 00:07:33.904 "data_size": 63488 00:07:33.904 } 00:07:33.904 ] 00:07:33.904 }' 00:07:33.904 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.904 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.163 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:34.163 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.163 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:34.163 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.163 12:27:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.163 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.422 [2024-11-19 12:27:39.464729] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:34.422 [2024-11-19 12:27:39.464848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.422 12:27:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72514 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72514 ']' 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72514 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.422 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72514 00:07:34.422 killing process with pid 72514 00:07:34.423 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.423 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.423 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72514' 00:07:34.423 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72514 00:07:34.423 [2024-11-19 12:27:39.587573] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.423 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72514 00:07:34.423 [2024-11-19 12:27:39.589233] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.993 12:27:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:34.993 00:07:34.993 real 0m4.021s 00:07:34.993 user 0m6.192s 00:07:34.993 sys 0m0.812s 00:07:34.993 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.993 12:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.993 ************************************ 00:07:34.993 END TEST raid_state_function_test_sb 00:07:34.993 ************************************ 00:07:34.993 12:27:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:34.993 12:27:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:34.993 12:27:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.993 12:27:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.993 ************************************ 00:07:34.993 START TEST raid_superblock_test 00:07:34.993 ************************************ 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:34.993 12:27:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72755 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72755 00:07:34.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72755 ']' 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.993 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.993 [2024-11-19 12:27:40.125454] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:34.993 [2024-11-19 12:27:40.125655] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72755 ] 00:07:35.253 [2024-11-19 12:27:40.285419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.253 [2024-11-19 12:27:40.360516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.253 [2024-11-19 12:27:40.437741] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.254 [2024-11-19 12:27:40.437915] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:35.824 12:27:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.824 malloc1 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.824 [2024-11-19 12:27:40.989392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:35.824 [2024-11-19 12:27:40.989494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.824 [2024-11-19 12:27:40.989523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:35.824 [2024-11-19 12:27:40.989550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.824 [2024-11-19 12:27:40.992005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.824 [2024-11-19 12:27:40.992101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:35.824 pt1 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:35.824 12:27:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.824 12:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.824 malloc2 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.824 [2024-11-19 12:27:41.033445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:35.824 [2024-11-19 12:27:41.033569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.824 [2024-11-19 12:27:41.033616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:35.824 
[2024-11-19 12:27:41.033661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.824 [2024-11-19 12:27:41.036429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.824 [2024-11-19 12:27:41.036512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:35.824 pt2 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.824 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.824 [2024-11-19 12:27:41.045472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:35.824 [2024-11-19 12:27:41.047627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:35.824 [2024-11-19 12:27:41.047838] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:35.824 [2024-11-19 12:27:41.047895] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.824 [2024-11-19 12:27:41.048190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:35.824 [2024-11-19 12:27:41.048377] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:35.824 [2024-11-19 12:27:41.048425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:35.824 [2024-11-19 12:27:41.048609] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.825 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.085 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.085 "name": "raid_bdev1", 00:07:36.085 "uuid": 
"bc18434f-ee22-4f1b-8dd4-33044dddce7e", 00:07:36.085 "strip_size_kb": 64, 00:07:36.085 "state": "online", 00:07:36.085 "raid_level": "raid0", 00:07:36.085 "superblock": true, 00:07:36.085 "num_base_bdevs": 2, 00:07:36.085 "num_base_bdevs_discovered": 2, 00:07:36.085 "num_base_bdevs_operational": 2, 00:07:36.085 "base_bdevs_list": [ 00:07:36.085 { 00:07:36.085 "name": "pt1", 00:07:36.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.085 "is_configured": true, 00:07:36.085 "data_offset": 2048, 00:07:36.085 "data_size": 63488 00:07:36.085 }, 00:07:36.085 { 00:07:36.085 "name": "pt2", 00:07:36.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.085 "is_configured": true, 00:07:36.085 "data_offset": 2048, 00:07:36.085 "data_size": 63488 00:07:36.085 } 00:07:36.085 ] 00:07:36.085 }' 00:07:36.085 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.085 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.345 12:27:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.345 [2024-11-19 12:27:41.477187] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.345 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:36.345 "name": "raid_bdev1", 00:07:36.345 "aliases": [ 00:07:36.345 "bc18434f-ee22-4f1b-8dd4-33044dddce7e" 00:07:36.345 ], 00:07:36.345 "product_name": "Raid Volume", 00:07:36.345 "block_size": 512, 00:07:36.345 "num_blocks": 126976, 00:07:36.345 "uuid": "bc18434f-ee22-4f1b-8dd4-33044dddce7e", 00:07:36.345 "assigned_rate_limits": { 00:07:36.345 "rw_ios_per_sec": 0, 00:07:36.345 "rw_mbytes_per_sec": 0, 00:07:36.345 "r_mbytes_per_sec": 0, 00:07:36.345 "w_mbytes_per_sec": 0 00:07:36.345 }, 00:07:36.345 "claimed": false, 00:07:36.345 "zoned": false, 00:07:36.345 "supported_io_types": { 00:07:36.345 "read": true, 00:07:36.345 "write": true, 00:07:36.345 "unmap": true, 00:07:36.345 "flush": true, 00:07:36.345 "reset": true, 00:07:36.345 "nvme_admin": false, 00:07:36.345 "nvme_io": false, 00:07:36.345 "nvme_io_md": false, 00:07:36.345 "write_zeroes": true, 00:07:36.345 "zcopy": false, 00:07:36.345 "get_zone_info": false, 00:07:36.345 "zone_management": false, 00:07:36.345 "zone_append": false, 00:07:36.345 "compare": false, 00:07:36.345 "compare_and_write": false, 00:07:36.345 "abort": false, 00:07:36.345 "seek_hole": false, 00:07:36.345 "seek_data": false, 00:07:36.345 "copy": false, 00:07:36.345 "nvme_iov_md": false 00:07:36.345 }, 00:07:36.345 "memory_domains": [ 00:07:36.345 { 00:07:36.345 "dma_device_id": "system", 00:07:36.345 "dma_device_type": 1 00:07:36.345 }, 00:07:36.345 { 00:07:36.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.345 "dma_device_type": 2 00:07:36.345 }, 00:07:36.345 { 00:07:36.345 "dma_device_id": "system", 00:07:36.345 "dma_device_type": 
1 00:07:36.345 }, 00:07:36.345 { 00:07:36.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.345 "dma_device_type": 2 00:07:36.345 } 00:07:36.345 ], 00:07:36.345 "driver_specific": { 00:07:36.345 "raid": { 00:07:36.345 "uuid": "bc18434f-ee22-4f1b-8dd4-33044dddce7e", 00:07:36.345 "strip_size_kb": 64, 00:07:36.345 "state": "online", 00:07:36.345 "raid_level": "raid0", 00:07:36.345 "superblock": true, 00:07:36.345 "num_base_bdevs": 2, 00:07:36.345 "num_base_bdevs_discovered": 2, 00:07:36.345 "num_base_bdevs_operational": 2, 00:07:36.345 "base_bdevs_list": [ 00:07:36.345 { 00:07:36.345 "name": "pt1", 00:07:36.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.345 "is_configured": true, 00:07:36.345 "data_offset": 2048, 00:07:36.345 "data_size": 63488 00:07:36.345 }, 00:07:36.345 { 00:07:36.345 "name": "pt2", 00:07:36.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.345 "is_configured": true, 00:07:36.345 "data_offset": 2048, 00:07:36.345 "data_size": 63488 00:07:36.345 } 00:07:36.345 ] 00:07:36.345 } 00:07:36.345 } 00:07:36.346 }' 00:07:36.346 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:36.346 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:36.346 pt2' 00:07:36.346 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.346 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:36.346 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.346 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:36.346 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.346 12:27:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.346 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.346 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.606 [2024-11-19 12:27:41.684595] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc18434f-ee22-4f1b-8dd4-33044dddce7e 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc18434f-ee22-4f1b-8dd4-33044dddce7e ']' 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.606 [2024-11-19 12:27:41.728240] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:36.606 [2024-11-19 12:27:41.728278] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:36.606 [2024-11-19 12:27:41.728380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.606 [2024-11-19 12:27:41.728451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:36.606 [2024-11-19 12:27:41.728473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.606 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.867 [2024-11-19 12:27:41.872072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:36.867 [2024-11-19 12:27:41.874342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:36.867 [2024-11-19 12:27:41.874432] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:36.867 [2024-11-19 12:27:41.874492] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:36.867 [2024-11-19 12:27:41.874512] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:36.867 [2024-11-19 12:27:41.874523] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:36.867 request: 00:07:36.867 { 00:07:36.867 "name": "raid_bdev1", 00:07:36.867 "raid_level": "raid0", 00:07:36.867 "base_bdevs": [ 00:07:36.867 "malloc1", 00:07:36.867 "malloc2" 00:07:36.867 ], 00:07:36.867 "strip_size_kb": 64, 00:07:36.867 "superblock": false, 00:07:36.867 "method": "bdev_raid_create", 00:07:36.867 "req_id": 1 00:07:36.867 } 00:07:36.867 Got JSON-RPC error response 00:07:36.867 response: 00:07:36.867 { 00:07:36.867 "code": -17, 00:07:36.867 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:36.867 } 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.867 [2024-11-19 12:27:41.931914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:36.867 [2024-11-19 12:27:41.932015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.867 [2024-11-19 12:27:41.932059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:36.867 [2024-11-19 12:27:41.932092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.867 [2024-11-19 12:27:41.934651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.867 [2024-11-19 12:27:41.934729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:36.867 [2024-11-19 12:27:41.934854] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:36.867 [2024-11-19 12:27:41.934906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:36.867 pt1 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.867 12:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.867 "name": "raid_bdev1", 00:07:36.867 "uuid": "bc18434f-ee22-4f1b-8dd4-33044dddce7e", 00:07:36.867 "strip_size_kb": 64, 00:07:36.867 "state": "configuring", 00:07:36.867 "raid_level": "raid0", 00:07:36.867 "superblock": true, 00:07:36.867 "num_base_bdevs": 2, 00:07:36.867 "num_base_bdevs_discovered": 1, 00:07:36.867 "num_base_bdevs_operational": 2, 00:07:36.867 "base_bdevs_list": [ 00:07:36.867 { 00:07:36.867 "name": "pt1", 00:07:36.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.867 "is_configured": true, 00:07:36.867 "data_offset": 2048, 00:07:36.867 "data_size": 63488 00:07:36.867 }, 00:07:36.867 { 00:07:36.867 "name": null, 00:07:36.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.867 "is_configured": false, 00:07:36.867 "data_offset": 2048, 00:07:36.867 "data_size": 63488 00:07:36.867 } 00:07:36.868 ] 00:07:36.868 }' 00:07:36.868 12:27:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.868 12:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.128 [2024-11-19 12:27:42.379214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:37.128 [2024-11-19 12:27:42.379363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.128 [2024-11-19 12:27:42.379415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:37.128 [2024-11-19 12:27:42.379455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.128 [2024-11-19 12:27:42.380035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.128 [2024-11-19 12:27:42.380112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:37.128 [2024-11-19 12:27:42.380246] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:37.128 [2024-11-19 12:27:42.380304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:37.128 [2024-11-19 12:27:42.380446] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:37.128 [2024-11-19 12:27:42.380489] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.128 [2024-11-19 12:27:42.380799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:37.128 [2024-11-19 12:27:42.380973] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:37.128 [2024-11-19 12:27:42.381028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:37.128 [2024-11-19 12:27:42.381197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.128 pt2 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.128 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.388 "name": "raid_bdev1", 00:07:37.388 "uuid": "bc18434f-ee22-4f1b-8dd4-33044dddce7e", 00:07:37.388 "strip_size_kb": 64, 00:07:37.388 "state": "online", 00:07:37.388 "raid_level": "raid0", 00:07:37.388 "superblock": true, 00:07:37.388 "num_base_bdevs": 2, 00:07:37.388 "num_base_bdevs_discovered": 2, 00:07:37.388 "num_base_bdevs_operational": 2, 00:07:37.388 "base_bdevs_list": [ 00:07:37.388 { 00:07:37.388 "name": "pt1", 00:07:37.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.388 "is_configured": true, 00:07:37.388 "data_offset": 2048, 00:07:37.388 "data_size": 63488 00:07:37.388 }, 00:07:37.388 { 00:07:37.388 "name": "pt2", 00:07:37.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.388 "is_configured": true, 00:07:37.388 "data_offset": 2048, 00:07:37.388 "data_size": 63488 00:07:37.388 } 00:07:37.388 ] 00:07:37.388 }' 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.388 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:37.648 
12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.648 [2024-11-19 12:27:42.838823] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.648 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.648 "name": "raid_bdev1", 00:07:37.648 "aliases": [ 00:07:37.648 "bc18434f-ee22-4f1b-8dd4-33044dddce7e" 00:07:37.648 ], 00:07:37.648 "product_name": "Raid Volume", 00:07:37.648 "block_size": 512, 00:07:37.648 "num_blocks": 126976, 00:07:37.648 "uuid": "bc18434f-ee22-4f1b-8dd4-33044dddce7e", 00:07:37.648 "assigned_rate_limits": { 00:07:37.648 "rw_ios_per_sec": 0, 00:07:37.648 "rw_mbytes_per_sec": 0, 00:07:37.648 "r_mbytes_per_sec": 0, 00:07:37.648 "w_mbytes_per_sec": 0 00:07:37.648 }, 00:07:37.648 "claimed": false, 00:07:37.648 "zoned": false, 00:07:37.648 "supported_io_types": { 00:07:37.648 "read": true, 00:07:37.648 "write": true, 00:07:37.648 "unmap": true, 00:07:37.648 "flush": true, 00:07:37.648 "reset": true, 00:07:37.648 "nvme_admin": false, 00:07:37.648 "nvme_io": false, 00:07:37.648 "nvme_io_md": false, 00:07:37.648 
"write_zeroes": true, 00:07:37.648 "zcopy": false, 00:07:37.648 "get_zone_info": false, 00:07:37.648 "zone_management": false, 00:07:37.648 "zone_append": false, 00:07:37.648 "compare": false, 00:07:37.648 "compare_and_write": false, 00:07:37.648 "abort": false, 00:07:37.648 "seek_hole": false, 00:07:37.648 "seek_data": false, 00:07:37.648 "copy": false, 00:07:37.648 "nvme_iov_md": false 00:07:37.648 }, 00:07:37.648 "memory_domains": [ 00:07:37.648 { 00:07:37.648 "dma_device_id": "system", 00:07:37.648 "dma_device_type": 1 00:07:37.648 }, 00:07:37.648 { 00:07:37.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.649 "dma_device_type": 2 00:07:37.649 }, 00:07:37.649 { 00:07:37.649 "dma_device_id": "system", 00:07:37.649 "dma_device_type": 1 00:07:37.649 }, 00:07:37.649 { 00:07:37.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.649 "dma_device_type": 2 00:07:37.649 } 00:07:37.649 ], 00:07:37.649 "driver_specific": { 00:07:37.649 "raid": { 00:07:37.649 "uuid": "bc18434f-ee22-4f1b-8dd4-33044dddce7e", 00:07:37.649 "strip_size_kb": 64, 00:07:37.649 "state": "online", 00:07:37.649 "raid_level": "raid0", 00:07:37.649 "superblock": true, 00:07:37.649 "num_base_bdevs": 2, 00:07:37.649 "num_base_bdevs_discovered": 2, 00:07:37.649 "num_base_bdevs_operational": 2, 00:07:37.649 "base_bdevs_list": [ 00:07:37.649 { 00:07:37.649 "name": "pt1", 00:07:37.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.649 "is_configured": true, 00:07:37.649 "data_offset": 2048, 00:07:37.649 "data_size": 63488 00:07:37.649 }, 00:07:37.649 { 00:07:37.649 "name": "pt2", 00:07:37.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.649 "is_configured": true, 00:07:37.649 "data_offset": 2048, 00:07:37.649 "data_size": 63488 00:07:37.649 } 00:07:37.649 ] 00:07:37.649 } 00:07:37.649 } 00:07:37.649 }' 00:07:37.649 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:37.909 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:37.909 pt2' 00:07:37.909 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.909 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:37.909 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.909 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.909 12:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:37.909 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.909 12:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.909 12:27:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.909 [2024-11-19 12:27:43.090306] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc18434f-ee22-4f1b-8dd4-33044dddce7e '!=' bc18434f-ee22-4f1b-8dd4-33044dddce7e ']' 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72755 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72755 ']' 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72755 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.909 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72755 00:07:38.169 12:27:43 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.169 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.169 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72755' 00:07:38.169 killing process with pid 72755 00:07:38.169 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72755 00:07:38.169 [2024-11-19 12:27:43.178995] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.169 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72755 00:07:38.169 [2024-11-19 12:27:43.179226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.169 [2024-11-19 12:27:43.179304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.169 [2024-11-19 12:27:43.179376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:38.169 [2024-11-19 12:27:43.223517] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.428 12:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:38.428 00:07:38.428 real 0m3.567s 00:07:38.428 user 0m5.249s 00:07:38.428 sys 0m0.841s 00:07:38.428 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.428 12:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.428 ************************************ 00:07:38.428 END TEST raid_superblock_test 00:07:38.428 ************************************ 00:07:38.428 12:27:43 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:38.428 12:27:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:38.428 12:27:43 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:38.428 12:27:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.428 ************************************ 00:07:38.428 START TEST raid_read_error_test 00:07:38.428 ************************************ 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.428 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rR4fruINgU 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72950 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72950 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72950 ']' 00:07:38.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.688 12:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.688 [2024-11-19 12:27:43.790876] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:38.688 [2024-11-19 12:27:43.791025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72950 ] 00:07:38.955 [2024-11-19 12:27:43.959175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.955 [2024-11-19 12:27:44.036360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.955 [2024-11-19 12:27:44.113730] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.955 [2024-11-19 12:27:44.113786] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 BaseBdev1_malloc 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 true 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.538 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 [2024-11-19 12:27:44.649029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.538 [2024-11-19 12:27:44.649104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.539 [2024-11-19 12:27:44.649135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.539 [2024-11-19 12:27:44.649149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.539 [2024-11-19 12:27:44.651731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.539 [2024-11-19 12:27:44.651786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.539 BaseBdev1 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:39.539 BaseBdev2_malloc 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.539 true 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.539 [2024-11-19 12:27:44.706057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.539 [2024-11-19 12:27:44.706121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.539 [2024-11-19 12:27:44.706145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.539 [2024-11-19 12:27:44.706156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.539 [2024-11-19 12:27:44.708784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.539 [2024-11-19 12:27:44.708824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.539 BaseBdev2 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.539 12:27:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.539 [2024-11-19 12:27:44.718113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.539 [2024-11-19 12:27:44.720333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.539 [2024-11-19 12:27:44.720623] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:39.539 [2024-11-19 12:27:44.720652] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.539 [2024-11-19 12:27:44.720957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:39.539 [2024-11-19 12:27:44.721123] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:39.539 [2024-11-19 12:27:44.721139] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:39.539 [2024-11-19 12:27:44.721287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.539 "name": "raid_bdev1", 00:07:39.539 "uuid": "647076b7-be83-4dc0-81e0-252dbe383748", 00:07:39.539 "strip_size_kb": 64, 00:07:39.539 "state": "online", 00:07:39.539 "raid_level": "raid0", 00:07:39.539 "superblock": true, 00:07:39.539 "num_base_bdevs": 2, 00:07:39.539 "num_base_bdevs_discovered": 2, 00:07:39.539 "num_base_bdevs_operational": 2, 00:07:39.539 "base_bdevs_list": [ 00:07:39.539 { 00:07:39.539 "name": "BaseBdev1", 00:07:39.539 "uuid": "923afad7-6844-55f6-9e30-da00e0e4a135", 00:07:39.539 "is_configured": true, 00:07:39.539 "data_offset": 2048, 00:07:39.539 "data_size": 63488 00:07:39.539 }, 00:07:39.539 { 00:07:39.539 "name": "BaseBdev2", 00:07:39.539 "uuid": "0beb8854-0e3d-525c-9e42-b123e2030a2f", 00:07:39.539 "is_configured": true, 00:07:39.539 "data_offset": 2048, 00:07:39.539 "data_size": 63488 00:07:39.539 } 00:07:39.539 ] 00:07:39.539 }' 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.539 12:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.109 12:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:40.109 12:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:40.109 [2024-11-19 12:27:45.229727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.049 "name": "raid_bdev1", 00:07:41.049 "uuid": "647076b7-be83-4dc0-81e0-252dbe383748", 00:07:41.049 "strip_size_kb": 64, 00:07:41.049 "state": "online", 00:07:41.049 "raid_level": "raid0", 00:07:41.049 "superblock": true, 00:07:41.049 "num_base_bdevs": 2, 00:07:41.049 "num_base_bdevs_discovered": 2, 00:07:41.049 "num_base_bdevs_operational": 2, 00:07:41.049 "base_bdevs_list": [ 00:07:41.049 { 00:07:41.049 "name": "BaseBdev1", 00:07:41.049 "uuid": "923afad7-6844-55f6-9e30-da00e0e4a135", 00:07:41.049 "is_configured": true, 00:07:41.049 "data_offset": 2048, 00:07:41.049 "data_size": 63488 00:07:41.049 }, 00:07:41.049 { 00:07:41.049 "name": "BaseBdev2", 00:07:41.049 "uuid": "0beb8854-0e3d-525c-9e42-b123e2030a2f", 00:07:41.049 "is_configured": true, 00:07:41.049 "data_offset": 2048, 00:07:41.049 "data_size": 63488 00:07:41.049 } 00:07:41.049 ] 00:07:41.049 }' 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.049 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 [2024-11-19 12:27:46.651144] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.617 [2024-11-19 12:27:46.651190] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.617 [2024-11-19 12:27:46.653970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.617 [2024-11-19 12:27:46.654062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.617 [2024-11-19 12:27:46.654151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.617 [2024-11-19 12:27:46.654208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.617 { 00:07:41.617 "results": [ 00:07:41.617 { 00:07:41.617 "job": "raid_bdev1", 00:07:41.617 "core_mask": "0x1", 00:07:41.617 "workload": "randrw", 00:07:41.617 "percentage": 50, 00:07:41.617 "status": "finished", 00:07:41.617 "queue_depth": 1, 00:07:41.617 "io_size": 131072, 00:07:41.617 "runtime": 1.421926, 00:07:41.617 "iops": 14211.710032730254, 00:07:41.617 "mibps": 1776.4637540912818, 00:07:41.617 "io_failed": 1, 00:07:41.617 "io_timeout": 0, 00:07:41.617 "avg_latency_us": 98.61818338969127, 00:07:41.617 "min_latency_us": 26.606113537117903, 00:07:41.617 "max_latency_us": 1409.4532751091704 00:07:41.617 } 00:07:41.617 ], 
00:07:41.617 "core_count": 1 00:07:41.617 } 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72950 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72950 ']' 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72950 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72950 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72950' 00:07:41.617 killing process with pid 72950 00:07:41.617 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72950 00:07:41.618 [2024-11-19 12:27:46.699683] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.618 12:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72950 00:07:41.618 [2024-11-19 12:27:46.728789] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rR4fruINgU 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:41.877 ************************************ 00:07:41.877 END TEST raid_read_error_test 00:07:41.877 ************************************ 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:41.877 00:07:41.877 real 0m3.437s 00:07:41.877 user 0m4.179s 00:07:41.877 sys 0m0.649s 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.877 12:27:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.137 12:27:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:42.137 12:27:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:42.137 12:27:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.137 12:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.137 ************************************ 00:07:42.137 START TEST raid_write_error_test 00:07:42.137 ************************************ 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.137 12:27:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:42.137 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:42.138 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HhAW9sYhBI 00:07:42.138 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73090 00:07:42.138 12:27:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:42.138 12:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73090 00:07:42.138 12:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73090 ']' 00:07:42.138 12:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.138 12:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.138 12:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.138 12:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.138 12:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.138 [2024-11-19 12:27:47.298245] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:42.138 [2024-11-19 12:27:47.298511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73090 ] 00:07:42.397 [2024-11-19 12:27:47.464914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.397 [2024-11-19 12:27:47.542353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.397 [2024-11-19 12:27:47.620899] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.397 [2024-11-19 12:27:47.620955] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.966 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.967 BaseBdev1_malloc 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.967 true 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.967 [2024-11-19 12:27:48.176615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:42.967 [2024-11-19 12:27:48.176682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.967 [2024-11-19 12:27:48.176705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:42.967 [2024-11-19 12:27:48.176717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.967 [2024-11-19 12:27:48.179284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.967 [2024-11-19 12:27:48.179398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:42.967 BaseBdev1 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.967 BaseBdev2_malloc 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:42.967 12:27:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.967 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.227 true 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.227 [2024-11-19 12:27:48.235165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:43.227 [2024-11-19 12:27:48.235229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.227 [2024-11-19 12:27:48.235255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:43.227 [2024-11-19 12:27:48.235267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.227 [2024-11-19 12:27:48.237813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.227 [2024-11-19 12:27:48.237849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:43.227 BaseBdev2 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.227 [2024-11-19 12:27:48.247184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:43.227 [2024-11-19 12:27:48.249517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.227 [2024-11-19 12:27:48.249790] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:43.227 [2024-11-19 12:27:48.249816] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:43.227 [2024-11-19 12:27:48.250121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:43.227 [2024-11-19 12:27:48.250270] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:43.227 [2024-11-19 12:27:48.250288] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:43.227 [2024-11-19 12:27:48.250423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.227 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.227 "name": "raid_bdev1", 00:07:43.227 "uuid": "9f423f9d-ccd8-49c6-bf17-df1688ffd5e2", 00:07:43.227 "strip_size_kb": 64, 00:07:43.227 "state": "online", 00:07:43.227 "raid_level": "raid0", 00:07:43.227 "superblock": true, 00:07:43.227 "num_base_bdevs": 2, 00:07:43.227 "num_base_bdevs_discovered": 2, 00:07:43.227 "num_base_bdevs_operational": 2, 00:07:43.227 "base_bdevs_list": [ 00:07:43.227 { 00:07:43.227 "name": "BaseBdev1", 00:07:43.227 "uuid": "b2f232d3-72ec-5a5e-9205-32bdfbcf265e", 00:07:43.227 "is_configured": true, 00:07:43.227 "data_offset": 2048, 00:07:43.227 "data_size": 63488 00:07:43.227 }, 00:07:43.227 { 00:07:43.227 "name": "BaseBdev2", 00:07:43.227 "uuid": "98652319-7d08-568a-a20b-f3052640b480", 00:07:43.228 "is_configured": true, 00:07:43.228 "data_offset": 2048, 00:07:43.228 "data_size": 63488 00:07:43.228 } 00:07:43.228 ] 00:07:43.228 }' 00:07:43.228 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.228 12:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.487 12:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:43.487 12:27:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:43.747 [2024-11-19 12:27:48.750981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.685 12:27:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.685 "name": "raid_bdev1", 00:07:44.685 "uuid": "9f423f9d-ccd8-49c6-bf17-df1688ffd5e2", 00:07:44.685 "strip_size_kb": 64, 00:07:44.685 "state": "online", 00:07:44.685 "raid_level": "raid0", 00:07:44.685 "superblock": true, 00:07:44.685 "num_base_bdevs": 2, 00:07:44.685 "num_base_bdevs_discovered": 2, 00:07:44.685 "num_base_bdevs_operational": 2, 00:07:44.685 "base_bdevs_list": [ 00:07:44.685 { 00:07:44.685 "name": "BaseBdev1", 00:07:44.685 "uuid": "b2f232d3-72ec-5a5e-9205-32bdfbcf265e", 00:07:44.685 "is_configured": true, 00:07:44.685 "data_offset": 2048, 00:07:44.685 "data_size": 63488 00:07:44.685 }, 00:07:44.685 { 00:07:44.685 "name": "BaseBdev2", 00:07:44.685 "uuid": "98652319-7d08-568a-a20b-f3052640b480", 00:07:44.685 "is_configured": true, 00:07:44.685 "data_offset": 2048, 00:07:44.685 "data_size": 63488 00:07:44.685 } 00:07:44.685 ] 00:07:44.685 }' 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.685 12:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.944 [2024-11-19 12:27:50.155718] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.944 [2024-11-19 12:27:50.155781] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.944 [2024-11-19 12:27:50.158315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.944 [2024-11-19 12:27:50.158405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.944 [2024-11-19 12:27:50.158452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.944 [2024-11-19 12:27:50.158462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:44.944 { 00:07:44.944 "results": [ 00:07:44.944 { 00:07:44.944 "job": "raid_bdev1", 00:07:44.944 "core_mask": "0x1", 00:07:44.944 "workload": "randrw", 00:07:44.944 "percentage": 50, 00:07:44.944 "status": "finished", 00:07:44.944 "queue_depth": 1, 00:07:44.944 "io_size": 131072, 00:07:44.944 "runtime": 1.405107, 00:07:44.944 "iops": 16301.249655720168, 00:07:44.944 "mibps": 2037.656206965021, 00:07:44.944 "io_failed": 1, 00:07:44.944 "io_timeout": 0, 00:07:44.944 "avg_latency_us": 85.1253877151998, 00:07:44.944 "min_latency_us": 25.4882096069869, 00:07:44.944 "max_latency_us": 1416.6078602620087 00:07:44.944 } 00:07:44.944 ], 00:07:44.944 "core_count": 1 00:07:44.944 } 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73090 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 73090 ']' 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73090 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.944 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73090 00:07:45.203 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.203 killing process with pid 73090 00:07:45.203 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.203 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73090' 00:07:45.203 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73090 00:07:45.203 [2024-11-19 12:27:50.205509] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.203 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73090 00:07:45.203 [2024-11-19 12:27:50.222265] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.462 12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HhAW9sYhBI 00:07:45.462 12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.462 12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.462 ************************************ 00:07:45.462 END TEST raid_write_error_test 00:07:45.462 ************************************ 00:07:45.462 12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:45.462 12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:45.462 
12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.462 12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.462 12:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:45.462 00:07:45.462 real 0m3.284s 00:07:45.462 user 0m4.060s 00:07:45.462 sys 0m0.610s 00:07:45.462 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.462 12:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.462 12:27:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:45.462 12:27:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:45.462 12:27:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.462 12:27:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.462 12:27:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.462 ************************************ 00:07:45.462 START TEST raid_state_function_test 00:07:45.462 ************************************ 00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.462 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73217 00:07:45.463 12:27:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73217' 00:07:45.463 Process raid pid: 73217 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73217 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73217 ']' 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.463 12:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.463 [2024-11-19 12:27:50.642321] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:45.463 [2024-11-19 12:27:50.642541] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.721 [2024-11-19 12:27:50.790684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.721 [2024-11-19 12:27:50.837727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.721 [2024-11-19 12:27:50.881911] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.721 [2024-11-19 12:27:50.881949] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.289 [2024-11-19 12:27:51.472069] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.289 [2024-11-19 12:27:51.472128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.289 [2024-11-19 12:27:51.472141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.289 [2024-11-19 12:27:51.472151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.289 12:27:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.289 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.289 "name": "Existed_Raid", 00:07:46.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.289 "strip_size_kb": 64, 00:07:46.289 "state": "configuring", 00:07:46.289 
"raid_level": "concat", 00:07:46.289 "superblock": false, 00:07:46.289 "num_base_bdevs": 2, 00:07:46.289 "num_base_bdevs_discovered": 0, 00:07:46.289 "num_base_bdevs_operational": 2, 00:07:46.289 "base_bdevs_list": [ 00:07:46.289 { 00:07:46.289 "name": "BaseBdev1", 00:07:46.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.289 "is_configured": false, 00:07:46.289 "data_offset": 0, 00:07:46.289 "data_size": 0 00:07:46.289 }, 00:07:46.289 { 00:07:46.290 "name": "BaseBdev2", 00:07:46.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.290 "is_configured": false, 00:07:46.290 "data_offset": 0, 00:07:46.290 "data_size": 0 00:07:46.290 } 00:07:46.290 ] 00:07:46.290 }' 00:07:46.290 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.290 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.875 [2024-11-19 12:27:51.899312] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.875 [2024-11-19 12:27:51.899450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:46.875 [2024-11-19 12:27:51.911343] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.875 [2024-11-19 12:27:51.911469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.875 [2024-11-19 12:27:51.911497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.875 [2024-11-19 12:27:51.911519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.875 [2024-11-19 12:27:51.932525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.875 BaseBdev1 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.875 [ 00:07:46.875 { 00:07:46.875 "name": "BaseBdev1", 00:07:46.875 "aliases": [ 00:07:46.875 "d0a0848b-2718-4e15-bd19-f27d1528c080" 00:07:46.875 ], 00:07:46.875 "product_name": "Malloc disk", 00:07:46.875 "block_size": 512, 00:07:46.875 "num_blocks": 65536, 00:07:46.875 "uuid": "d0a0848b-2718-4e15-bd19-f27d1528c080", 00:07:46.875 "assigned_rate_limits": { 00:07:46.875 "rw_ios_per_sec": 0, 00:07:46.875 "rw_mbytes_per_sec": 0, 00:07:46.875 "r_mbytes_per_sec": 0, 00:07:46.875 "w_mbytes_per_sec": 0 00:07:46.875 }, 00:07:46.875 "claimed": true, 00:07:46.875 "claim_type": "exclusive_write", 00:07:46.875 "zoned": false, 00:07:46.875 "supported_io_types": { 00:07:46.875 "read": true, 00:07:46.875 "write": true, 00:07:46.875 "unmap": true, 00:07:46.875 "flush": true, 00:07:46.875 "reset": true, 00:07:46.875 "nvme_admin": false, 00:07:46.875 "nvme_io": false, 00:07:46.875 "nvme_io_md": false, 00:07:46.875 "write_zeroes": true, 00:07:46.875 "zcopy": true, 00:07:46.875 "get_zone_info": false, 00:07:46.875 "zone_management": false, 00:07:46.875 "zone_append": false, 00:07:46.875 "compare": false, 00:07:46.875 "compare_and_write": false, 00:07:46.875 "abort": true, 00:07:46.875 "seek_hole": false, 00:07:46.875 "seek_data": false, 00:07:46.875 "copy": true, 00:07:46.875 "nvme_iov_md": 
false 00:07:46.875 }, 00:07:46.875 "memory_domains": [ 00:07:46.875 { 00:07:46.875 "dma_device_id": "system", 00:07:46.875 "dma_device_type": 1 00:07:46.875 }, 00:07:46.875 { 00:07:46.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.875 "dma_device_type": 2 00:07:46.875 } 00:07:46.875 ], 00:07:46.875 "driver_specific": {} 00:07:46.875 } 00:07:46.875 ] 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.875 
12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.875 12:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.875 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.875 "name": "Existed_Raid", 00:07:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.875 "strip_size_kb": 64, 00:07:46.876 "state": "configuring", 00:07:46.876 "raid_level": "concat", 00:07:46.876 "superblock": false, 00:07:46.876 "num_base_bdevs": 2, 00:07:46.876 "num_base_bdevs_discovered": 1, 00:07:46.876 "num_base_bdevs_operational": 2, 00:07:46.876 "base_bdevs_list": [ 00:07:46.876 { 00:07:46.876 "name": "BaseBdev1", 00:07:46.876 "uuid": "d0a0848b-2718-4e15-bd19-f27d1528c080", 00:07:46.876 "is_configured": true, 00:07:46.876 "data_offset": 0, 00:07:46.876 "data_size": 65536 00:07:46.876 }, 00:07:46.876 { 00:07:46.876 "name": "BaseBdev2", 00:07:46.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.876 "is_configured": false, 00:07:46.876 "data_offset": 0, 00:07:46.876 "data_size": 0 00:07:46.876 } 00:07:46.876 ] 00:07:46.876 }' 00:07:46.876 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.876 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.443 [2024-11-19 12:27:52.423704] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.443 [2024-11-19 12:27:52.423824] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.443 [2024-11-19 12:27:52.435719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.443 [2024-11-19 12:27:52.437525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.443 [2024-11-19 12:27:52.437566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.443 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.444 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.444 "name": "Existed_Raid", 00:07:47.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.444 "strip_size_kb": 64, 00:07:47.444 "state": "configuring", 00:07:47.444 "raid_level": "concat", 00:07:47.444 "superblock": false, 00:07:47.444 "num_base_bdevs": 2, 00:07:47.444 "num_base_bdevs_discovered": 1, 00:07:47.444 "num_base_bdevs_operational": 2, 00:07:47.444 "base_bdevs_list": [ 00:07:47.444 { 00:07:47.444 "name": "BaseBdev1", 00:07:47.444 "uuid": "d0a0848b-2718-4e15-bd19-f27d1528c080", 00:07:47.444 "is_configured": true, 00:07:47.444 "data_offset": 0, 00:07:47.444 "data_size": 65536 00:07:47.444 }, 00:07:47.444 { 00:07:47.444 "name": "BaseBdev2", 00:07:47.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.444 "is_configured": false, 00:07:47.444 "data_offset": 0, 00:07:47.444 "data_size": 0 00:07:47.444 } 
00:07:47.444 ] 00:07:47.444 }' 00:07:47.444 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.444 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.703 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.703 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.703 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.703 [2024-11-19 12:27:52.908025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.703 [2024-11-19 12:27:52.908175] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:47.703 [2024-11-19 12:27:52.908212] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:47.703 [2024-11-19 12:27:52.908577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:47.704 [2024-11-19 12:27:52.908793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:47.704 [2024-11-19 12:27:52.908859] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:47.704 [2024-11-19 12:27:52.909138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.704 BaseBdev2 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.704 12:27:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.704 [ 00:07:47.704 { 00:07:47.704 "name": "BaseBdev2", 00:07:47.704 "aliases": [ 00:07:47.704 "edb2085c-9150-42aa-99dc-50ced1904f4a" 00:07:47.704 ], 00:07:47.704 "product_name": "Malloc disk", 00:07:47.704 "block_size": 512, 00:07:47.704 "num_blocks": 65536, 00:07:47.704 "uuid": "edb2085c-9150-42aa-99dc-50ced1904f4a", 00:07:47.704 "assigned_rate_limits": { 00:07:47.704 "rw_ios_per_sec": 0, 00:07:47.704 "rw_mbytes_per_sec": 0, 00:07:47.704 "r_mbytes_per_sec": 0, 00:07:47.704 "w_mbytes_per_sec": 0 00:07:47.704 }, 00:07:47.704 "claimed": true, 00:07:47.704 "claim_type": "exclusive_write", 00:07:47.704 "zoned": false, 00:07:47.704 "supported_io_types": { 00:07:47.704 "read": true, 00:07:47.704 "write": true, 00:07:47.704 "unmap": true, 00:07:47.704 "flush": true, 00:07:47.704 "reset": true, 00:07:47.704 "nvme_admin": false, 00:07:47.704 "nvme_io": false, 00:07:47.704 "nvme_io_md": 
false, 00:07:47.704 "write_zeroes": true, 00:07:47.704 "zcopy": true, 00:07:47.704 "get_zone_info": false, 00:07:47.704 "zone_management": false, 00:07:47.704 "zone_append": false, 00:07:47.704 "compare": false, 00:07:47.704 "compare_and_write": false, 00:07:47.704 "abort": true, 00:07:47.704 "seek_hole": false, 00:07:47.704 "seek_data": false, 00:07:47.704 "copy": true, 00:07:47.704 "nvme_iov_md": false 00:07:47.704 }, 00:07:47.704 "memory_domains": [ 00:07:47.704 { 00:07:47.704 "dma_device_id": "system", 00:07:47.704 "dma_device_type": 1 00:07:47.704 }, 00:07:47.704 { 00:07:47.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.704 "dma_device_type": 2 00:07:47.704 } 00:07:47.704 ], 00:07:47.704 "driver_specific": {} 00:07:47.704 } 00:07:47.704 ] 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.704 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.963 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.963 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.963 "name": "Existed_Raid", 00:07:47.963 "uuid": "5b8feee5-82b1-4113-b8a0-119025ef6eae", 00:07:47.963 "strip_size_kb": 64, 00:07:47.964 "state": "online", 00:07:47.964 "raid_level": "concat", 00:07:47.964 "superblock": false, 00:07:47.964 "num_base_bdevs": 2, 00:07:47.964 "num_base_bdevs_discovered": 2, 00:07:47.964 "num_base_bdevs_operational": 2, 00:07:47.964 "base_bdevs_list": [ 00:07:47.964 { 00:07:47.964 "name": "BaseBdev1", 00:07:47.964 "uuid": "d0a0848b-2718-4e15-bd19-f27d1528c080", 00:07:47.964 "is_configured": true, 00:07:47.964 "data_offset": 0, 00:07:47.964 "data_size": 65536 00:07:47.964 }, 00:07:47.964 { 00:07:47.964 "name": "BaseBdev2", 00:07:47.964 "uuid": "edb2085c-9150-42aa-99dc-50ced1904f4a", 00:07:47.964 "is_configured": true, 00:07:47.964 "data_offset": 0, 00:07:47.964 "data_size": 65536 00:07:47.964 } 00:07:47.964 ] 00:07:47.964 }' 00:07:47.964 12:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:47.964 12:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.223 [2024-11-19 12:27:53.343661] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.223 "name": "Existed_Raid", 00:07:48.223 "aliases": [ 00:07:48.223 "5b8feee5-82b1-4113-b8a0-119025ef6eae" 00:07:48.223 ], 00:07:48.223 "product_name": "Raid Volume", 00:07:48.223 "block_size": 512, 00:07:48.223 "num_blocks": 131072, 00:07:48.223 "uuid": "5b8feee5-82b1-4113-b8a0-119025ef6eae", 00:07:48.223 "assigned_rate_limits": { 00:07:48.223 "rw_ios_per_sec": 0, 00:07:48.223 "rw_mbytes_per_sec": 0, 00:07:48.223 "r_mbytes_per_sec": 
0, 00:07:48.223 "w_mbytes_per_sec": 0 00:07:48.223 }, 00:07:48.223 "claimed": false, 00:07:48.223 "zoned": false, 00:07:48.223 "supported_io_types": { 00:07:48.223 "read": true, 00:07:48.223 "write": true, 00:07:48.223 "unmap": true, 00:07:48.223 "flush": true, 00:07:48.223 "reset": true, 00:07:48.223 "nvme_admin": false, 00:07:48.223 "nvme_io": false, 00:07:48.223 "nvme_io_md": false, 00:07:48.223 "write_zeroes": true, 00:07:48.223 "zcopy": false, 00:07:48.223 "get_zone_info": false, 00:07:48.223 "zone_management": false, 00:07:48.223 "zone_append": false, 00:07:48.223 "compare": false, 00:07:48.223 "compare_and_write": false, 00:07:48.223 "abort": false, 00:07:48.223 "seek_hole": false, 00:07:48.223 "seek_data": false, 00:07:48.223 "copy": false, 00:07:48.223 "nvme_iov_md": false 00:07:48.223 }, 00:07:48.223 "memory_domains": [ 00:07:48.223 { 00:07:48.223 "dma_device_id": "system", 00:07:48.223 "dma_device_type": 1 00:07:48.223 }, 00:07:48.223 { 00:07:48.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.223 "dma_device_type": 2 00:07:48.223 }, 00:07:48.223 { 00:07:48.223 "dma_device_id": "system", 00:07:48.223 "dma_device_type": 1 00:07:48.223 }, 00:07:48.223 { 00:07:48.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.223 "dma_device_type": 2 00:07:48.223 } 00:07:48.223 ], 00:07:48.223 "driver_specific": { 00:07:48.223 "raid": { 00:07:48.223 "uuid": "5b8feee5-82b1-4113-b8a0-119025ef6eae", 00:07:48.223 "strip_size_kb": 64, 00:07:48.223 "state": "online", 00:07:48.223 "raid_level": "concat", 00:07:48.223 "superblock": false, 00:07:48.223 "num_base_bdevs": 2, 00:07:48.223 "num_base_bdevs_discovered": 2, 00:07:48.223 "num_base_bdevs_operational": 2, 00:07:48.223 "base_bdevs_list": [ 00:07:48.223 { 00:07:48.223 "name": "BaseBdev1", 00:07:48.223 "uuid": "d0a0848b-2718-4e15-bd19-f27d1528c080", 00:07:48.223 "is_configured": true, 00:07:48.223 "data_offset": 0, 00:07:48.223 "data_size": 65536 00:07:48.223 }, 00:07:48.223 { 00:07:48.223 "name": "BaseBdev2", 
00:07:48.223 "uuid": "edb2085c-9150-42aa-99dc-50ced1904f4a", 00:07:48.223 "is_configured": true, 00:07:48.223 "data_offset": 0, 00:07:48.223 "data_size": 65536 00:07:48.223 } 00:07:48.223 ] 00:07:48.223 } 00:07:48.223 } 00:07:48.223 }' 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.223 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:48.224 BaseBdev2' 00:07:48.224 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.224 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.224 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.224 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:48.224 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.224 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.224 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.224 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.483 [2024-11-19 12:27:53.567007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.483 [2024-11-19 12:27:53.567098] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.483 [2024-11-19 12:27:53.567187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.483 "name": "Existed_Raid", 00:07:48.483 "uuid": "5b8feee5-82b1-4113-b8a0-119025ef6eae", 00:07:48.483 "strip_size_kb": 64, 00:07:48.483 
"state": "offline", 00:07:48.483 "raid_level": "concat", 00:07:48.483 "superblock": false, 00:07:48.483 "num_base_bdevs": 2, 00:07:48.483 "num_base_bdevs_discovered": 1, 00:07:48.483 "num_base_bdevs_operational": 1, 00:07:48.483 "base_bdevs_list": [ 00:07:48.483 { 00:07:48.483 "name": null, 00:07:48.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.483 "is_configured": false, 00:07:48.483 "data_offset": 0, 00:07:48.483 "data_size": 65536 00:07:48.483 }, 00:07:48.483 { 00:07:48.483 "name": "BaseBdev2", 00:07:48.483 "uuid": "edb2085c-9150-42aa-99dc-50ced1904f4a", 00:07:48.483 "is_configured": true, 00:07:48.483 "data_offset": 0, 00:07:48.483 "data_size": 65536 00:07:48.483 } 00:07:48.483 ] 00:07:48.483 }' 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.483 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.742 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.742 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.742 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.742 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.742 12:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.742 12:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.002 [2024-11-19 12:27:54.049967] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:49.002 [2024-11-19 12:27:54.050026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73217 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73217 ']' 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73217 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73217 00:07:49.002 killing process with pid 73217 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73217' 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73217 00:07:49.002 [2024-11-19 12:27:54.142628] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.002 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73217 00:07:49.002 [2024-11-19 12:27:54.143658] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:49.262 00:07:49.262 real 0m3.847s 00:07:49.262 user 0m6.007s 00:07:49.262 sys 0m0.786s 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.262 ************************************ 00:07:49.262 END TEST raid_state_function_test 00:07:49.262 ************************************ 00:07:49.262 12:27:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:49.262 12:27:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:49.262 12:27:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.262 12:27:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.262 ************************************ 00:07:49.262 START TEST raid_state_function_test_sb 00:07:49.262 ************************************ 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73459 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73459' 00:07:49.262 Process raid pid: 73459 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73459 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73459 ']' 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.262 12:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.521 [2024-11-19 12:27:54.557710] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:49.521 [2024-11-19 12:27:54.557839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.521 [2024-11-19 12:27:54.719909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.521 [2024-11-19 12:27:54.769175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.779 [2024-11-19 12:27:54.812892] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.779 [2024-11-19 12:27:54.812929] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.346 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.346 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:50.346 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.346 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.346 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:50.346 [2024-11-19 12:27:55.399239] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.347 [2024-11-19 12:27:55.399342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.347 [2024-11-19 12:27:55.399385] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.347 [2024-11-19 12:27:55.399409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.347 12:27:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.347 "name": "Existed_Raid", 00:07:50.347 "uuid": "8d195e2b-eaff-46b6-b761-a78762cfbb7c", 00:07:50.347 "strip_size_kb": 64, 00:07:50.347 "state": "configuring", 00:07:50.347 "raid_level": "concat", 00:07:50.347 "superblock": true, 00:07:50.347 "num_base_bdevs": 2, 00:07:50.347 "num_base_bdevs_discovered": 0, 00:07:50.347 "num_base_bdevs_operational": 2, 00:07:50.347 "base_bdevs_list": [ 00:07:50.347 { 00:07:50.347 "name": "BaseBdev1", 00:07:50.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.347 "is_configured": false, 00:07:50.347 "data_offset": 0, 00:07:50.347 "data_size": 0 00:07:50.347 }, 00:07:50.347 { 00:07:50.347 "name": "BaseBdev2", 00:07:50.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.347 "is_configured": false, 00:07:50.347 "data_offset": 0, 00:07:50.347 "data_size": 0 00:07:50.347 } 00:07:50.347 ] 00:07:50.347 }' 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.347 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.606 
[2024-11-19 12:27:55.802588] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.606 [2024-11-19 12:27:55.802687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.606 [2024-11-19 12:27:55.814633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.606 [2024-11-19 12:27:55.814708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.606 [2024-11-19 12:27:55.814718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.606 [2024-11-19 12:27:55.814729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.606 [2024-11-19 12:27:55.835931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.606 BaseBdev1 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.606 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.606 [ 00:07:50.606 { 00:07:50.606 "name": "BaseBdev1", 00:07:50.606 "aliases": [ 00:07:50.606 "749940f8-bdfd-47ef-84ae-825d2d1a00bf" 00:07:50.606 ], 00:07:50.606 "product_name": "Malloc disk", 00:07:50.606 "block_size": 512, 00:07:50.606 "num_blocks": 65536, 00:07:50.606 "uuid": "749940f8-bdfd-47ef-84ae-825d2d1a00bf", 00:07:50.606 "assigned_rate_limits": { 00:07:50.606 "rw_ios_per_sec": 0, 00:07:50.865 "rw_mbytes_per_sec": 0, 
00:07:50.865 "r_mbytes_per_sec": 0, 00:07:50.865 "w_mbytes_per_sec": 0 00:07:50.865 }, 00:07:50.865 "claimed": true, 00:07:50.865 "claim_type": "exclusive_write", 00:07:50.865 "zoned": false, 00:07:50.865 "supported_io_types": { 00:07:50.865 "read": true, 00:07:50.865 "write": true, 00:07:50.865 "unmap": true, 00:07:50.865 "flush": true, 00:07:50.865 "reset": true, 00:07:50.865 "nvme_admin": false, 00:07:50.865 "nvme_io": false, 00:07:50.865 "nvme_io_md": false, 00:07:50.865 "write_zeroes": true, 00:07:50.865 "zcopy": true, 00:07:50.865 "get_zone_info": false, 00:07:50.865 "zone_management": false, 00:07:50.865 "zone_append": false, 00:07:50.865 "compare": false, 00:07:50.865 "compare_and_write": false, 00:07:50.865 "abort": true, 00:07:50.865 "seek_hole": false, 00:07:50.865 "seek_data": false, 00:07:50.865 "copy": true, 00:07:50.865 "nvme_iov_md": false 00:07:50.865 }, 00:07:50.865 "memory_domains": [ 00:07:50.865 { 00:07:50.865 "dma_device_id": "system", 00:07:50.865 "dma_device_type": 1 00:07:50.865 }, 00:07:50.865 { 00:07:50.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.865 "dma_device_type": 2 00:07:50.865 } 00:07:50.865 ], 00:07:50.865 "driver_specific": {} 00:07:50.865 } 00:07:50.865 ] 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.865 12:27:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.865 "name": "Existed_Raid", 00:07:50.865 "uuid": "343175a2-1014-42a2-b3d1-9ee40e2b626e", 00:07:50.865 "strip_size_kb": 64, 00:07:50.865 "state": "configuring", 00:07:50.865 "raid_level": "concat", 00:07:50.865 "superblock": true, 00:07:50.865 "num_base_bdevs": 2, 00:07:50.865 "num_base_bdevs_discovered": 1, 00:07:50.865 "num_base_bdevs_operational": 2, 00:07:50.865 "base_bdevs_list": [ 00:07:50.865 { 00:07:50.865 "name": "BaseBdev1", 00:07:50.865 "uuid": "749940f8-bdfd-47ef-84ae-825d2d1a00bf", 00:07:50.865 "is_configured": true, 00:07:50.865 "data_offset": 2048, 00:07:50.865 "data_size": 63488 00:07:50.865 }, 00:07:50.865 { 
00:07:50.865 "name": "BaseBdev2", 00:07:50.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.865 "is_configured": false, 00:07:50.865 "data_offset": 0, 00:07:50.865 "data_size": 0 00:07:50.865 } 00:07:50.865 ] 00:07:50.865 }' 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.865 12:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.124 [2024-11-19 12:27:56.347075] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.124 [2024-11-19 12:27:56.347173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.124 [2024-11-19 12:27:56.359083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.124 [2024-11-19 12:27:56.360912] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.124 [2024-11-19 12:27:56.360984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.124 12:27:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.124 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.125 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.125 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.125 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.125 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.125 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.383 12:27:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.383 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.383 "name": "Existed_Raid", 00:07:51.383 "uuid": "31667474-28ae-49bd-9af5-4815df6e464f", 00:07:51.383 "strip_size_kb": 64, 00:07:51.383 "state": "configuring", 00:07:51.383 "raid_level": "concat", 00:07:51.383 "superblock": true, 00:07:51.383 "num_base_bdevs": 2, 00:07:51.383 "num_base_bdevs_discovered": 1, 00:07:51.383 "num_base_bdevs_operational": 2, 00:07:51.383 "base_bdevs_list": [ 00:07:51.383 { 00:07:51.383 "name": "BaseBdev1", 00:07:51.383 "uuid": "749940f8-bdfd-47ef-84ae-825d2d1a00bf", 00:07:51.383 "is_configured": true, 00:07:51.383 "data_offset": 2048, 00:07:51.383 "data_size": 63488 00:07:51.383 }, 00:07:51.383 { 00:07:51.383 "name": "BaseBdev2", 00:07:51.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.383 "is_configured": false, 00:07:51.383 "data_offset": 0, 00:07:51.383 "data_size": 0 00:07:51.383 } 00:07:51.383 ] 00:07:51.383 }' 00:07:51.383 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.383 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.643 [2024-11-19 12:27:56.862232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.643 [2024-11-19 12:27:56.863033] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:51.643 [2024-11-19 12:27:56.863212] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:07:51.643 BaseBdev2 00:07:51.643 [2024-11-19 12:27:56.864223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.643 [2024-11-19 12:27:56.864700] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:51.643 [2024-11-19 12:27:56.864813] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:51.643 [2024-11-19 12:27:56.865216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.643 12:27:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.643 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.643 [ 00:07:51.643 { 00:07:51.643 "name": "BaseBdev2", 00:07:51.643 "aliases": [ 00:07:51.643 "cd77b7c1-8851-49c5-a9df-aaffafaf8b87" 00:07:51.643 ], 00:07:51.643 "product_name": "Malloc disk", 00:07:51.643 "block_size": 512, 00:07:51.643 "num_blocks": 65536, 00:07:51.643 "uuid": "cd77b7c1-8851-49c5-a9df-aaffafaf8b87", 00:07:51.643 "assigned_rate_limits": { 00:07:51.643 "rw_ios_per_sec": 0, 00:07:51.643 "rw_mbytes_per_sec": 0, 00:07:51.643 "r_mbytes_per_sec": 0, 00:07:51.643 "w_mbytes_per_sec": 0 00:07:51.643 }, 00:07:51.643 "claimed": true, 00:07:51.643 "claim_type": "exclusive_write", 00:07:51.643 "zoned": false, 00:07:51.643 "supported_io_types": { 00:07:51.643 "read": true, 00:07:51.643 "write": true, 00:07:51.643 "unmap": true, 00:07:51.643 "flush": true, 00:07:51.643 "reset": true, 00:07:51.643 "nvme_admin": false, 00:07:51.643 "nvme_io": false, 00:07:51.643 "nvme_io_md": false, 00:07:51.643 "write_zeroes": true, 00:07:51.643 "zcopy": true, 00:07:51.643 "get_zone_info": false, 00:07:51.643 "zone_management": false, 00:07:51.643 "zone_append": false, 00:07:51.643 "compare": false, 00:07:51.643 "compare_and_write": false, 00:07:51.643 "abort": true, 00:07:51.643 "seek_hole": false, 00:07:51.643 "seek_data": false, 00:07:51.643 "copy": true, 00:07:51.643 "nvme_iov_md": false 00:07:51.643 }, 00:07:51.643 "memory_domains": [ 00:07:51.643 { 00:07:51.643 "dma_device_id": "system", 00:07:51.643 "dma_device_type": 1 00:07:51.643 }, 00:07:51.643 { 00:07:51.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.903 "dma_device_type": 2 00:07:51.903 } 00:07:51.903 ], 00:07:51.903 "driver_specific": {} 00:07:51.903 } 00:07:51.903 ] 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.903 12:27:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.903 12:27:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.903 "name": "Existed_Raid", 00:07:51.903 "uuid": "31667474-28ae-49bd-9af5-4815df6e464f", 00:07:51.903 "strip_size_kb": 64, 00:07:51.903 "state": "online", 00:07:51.903 "raid_level": "concat", 00:07:51.903 "superblock": true, 00:07:51.903 "num_base_bdevs": 2, 00:07:51.903 "num_base_bdevs_discovered": 2, 00:07:51.903 "num_base_bdevs_operational": 2, 00:07:51.903 "base_bdevs_list": [ 00:07:51.903 { 00:07:51.903 "name": "BaseBdev1", 00:07:51.903 "uuid": "749940f8-bdfd-47ef-84ae-825d2d1a00bf", 00:07:51.903 "is_configured": true, 00:07:51.903 "data_offset": 2048, 00:07:51.903 "data_size": 63488 00:07:51.903 }, 00:07:51.903 { 00:07:51.903 "name": "BaseBdev2", 00:07:51.903 "uuid": "cd77b7c1-8851-49c5-a9df-aaffafaf8b87", 00:07:51.903 "is_configured": true, 00:07:51.903 "data_offset": 2048, 00:07:51.903 "data_size": 63488 00:07:51.903 } 00:07:51.903 ] 00:07:51.903 }' 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.903 12:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 
00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.162 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.162 [2024-11-19 12:27:57.341680] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.163 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.163 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.163 "name": "Existed_Raid", 00:07:52.163 "aliases": [ 00:07:52.163 "31667474-28ae-49bd-9af5-4815df6e464f" 00:07:52.163 ], 00:07:52.163 "product_name": "Raid Volume", 00:07:52.163 "block_size": 512, 00:07:52.163 "num_blocks": 126976, 00:07:52.163 "uuid": "31667474-28ae-49bd-9af5-4815df6e464f", 00:07:52.163 "assigned_rate_limits": { 00:07:52.163 "rw_ios_per_sec": 0, 00:07:52.163 "rw_mbytes_per_sec": 0, 00:07:52.163 "r_mbytes_per_sec": 0, 00:07:52.163 "w_mbytes_per_sec": 0 00:07:52.163 }, 00:07:52.163 "claimed": false, 00:07:52.163 "zoned": false, 00:07:52.163 "supported_io_types": { 00:07:52.163 "read": true, 00:07:52.163 "write": true, 00:07:52.163 "unmap": true, 00:07:52.163 "flush": true, 00:07:52.163 "reset": true, 00:07:52.163 "nvme_admin": false, 00:07:52.163 "nvme_io": false, 00:07:52.163 "nvme_io_md": false, 00:07:52.163 "write_zeroes": true, 00:07:52.163 "zcopy": false, 00:07:52.163 "get_zone_info": false, 00:07:52.163 "zone_management": false, 00:07:52.163 "zone_append": false, 00:07:52.163 "compare": false, 00:07:52.163 "compare_and_write": false, 00:07:52.163 "abort": false, 00:07:52.163 "seek_hole": false, 00:07:52.163 "seek_data": false, 00:07:52.163 "copy": false, 
00:07:52.163 "nvme_iov_md": false 00:07:52.163 }, 00:07:52.163 "memory_domains": [ 00:07:52.163 { 00:07:52.163 "dma_device_id": "system", 00:07:52.163 "dma_device_type": 1 00:07:52.163 }, 00:07:52.163 { 00:07:52.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.163 "dma_device_type": 2 00:07:52.163 }, 00:07:52.163 { 00:07:52.163 "dma_device_id": "system", 00:07:52.163 "dma_device_type": 1 00:07:52.163 }, 00:07:52.163 { 00:07:52.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.163 "dma_device_type": 2 00:07:52.163 } 00:07:52.163 ], 00:07:52.163 "driver_specific": { 00:07:52.163 "raid": { 00:07:52.163 "uuid": "31667474-28ae-49bd-9af5-4815df6e464f", 00:07:52.163 "strip_size_kb": 64, 00:07:52.163 "state": "online", 00:07:52.163 "raid_level": "concat", 00:07:52.163 "superblock": true, 00:07:52.163 "num_base_bdevs": 2, 00:07:52.163 "num_base_bdevs_discovered": 2, 00:07:52.163 "num_base_bdevs_operational": 2, 00:07:52.163 "base_bdevs_list": [ 00:07:52.163 { 00:07:52.163 "name": "BaseBdev1", 00:07:52.163 "uuid": "749940f8-bdfd-47ef-84ae-825d2d1a00bf", 00:07:52.163 "is_configured": true, 00:07:52.163 "data_offset": 2048, 00:07:52.163 "data_size": 63488 00:07:52.163 }, 00:07:52.163 { 00:07:52.163 "name": "BaseBdev2", 00:07:52.163 "uuid": "cd77b7c1-8851-49c5-a9df-aaffafaf8b87", 00:07:52.163 "is_configured": true, 00:07:52.163 "data_offset": 2048, 00:07:52.163 "data_size": 63488 00:07:52.163 } 00:07:52.163 ] 00:07:52.163 } 00:07:52.163 } 00:07:52.163 }' 00:07:52.163 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.163 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.163 BaseBdev2' 00:07:52.163 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.422 12:27:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.422 [2024-11-19 12:27:57.529114] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.422 [2024-11-19 12:27:57.529187] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.422 [2024-11-19 12:27:57.529272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.422 
12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.422 "name": "Existed_Raid", 00:07:52.422 "uuid": "31667474-28ae-49bd-9af5-4815df6e464f", 00:07:52.422 "strip_size_kb": 64, 00:07:52.422 "state": "offline", 00:07:52.422 "raid_level": "concat", 00:07:52.422 "superblock": true, 00:07:52.422 "num_base_bdevs": 2, 00:07:52.422 "num_base_bdevs_discovered": 1, 00:07:52.422 "num_base_bdevs_operational": 1, 00:07:52.422 "base_bdevs_list": [ 00:07:52.422 { 00:07:52.422 "name": null, 00:07:52.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.422 "is_configured": false, 00:07:52.422 "data_offset": 0, 00:07:52.422 "data_size": 63488 00:07:52.422 }, 00:07:52.422 { 00:07:52.422 "name": "BaseBdev2", 00:07:52.422 "uuid": "cd77b7c1-8851-49c5-a9df-aaffafaf8b87", 00:07:52.422 
"is_configured": true, 00:07:52.422 "data_offset": 2048, 00:07:52.422 "data_size": 63488 00:07:52.422 } 00:07:52.422 ] 00:07:52.422 }' 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.422 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.992 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:52.992 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.992 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.992 12:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:52.992 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.992 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.992 12:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.992 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:52.992 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:52.992 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:52.992 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.992 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.992 [2024-11-19 12:27:58.019940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.992 [2024-11-19 12:27:58.020058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:52.992 12:27:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.992 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:52.992 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.992 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73459 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73459 ']' 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73459 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73459 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73459' 00:07:52.993 killing process with pid 73459 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73459 00:07:52.993 [2024-11-19 12:27:58.129168] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.993 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73459 00:07:52.993 [2024-11-19 12:27:58.130199] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.253 12:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.253 00:07:53.253 real 0m3.916s 00:07:53.253 user 0m6.101s 00:07:53.253 sys 0m0.830s 00:07:53.253 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.253 ************************************ 00:07:53.253 END TEST raid_state_function_test_sb 00:07:53.253 ************************************ 00:07:53.253 12:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.253 12:27:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:53.253 12:27:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:53.253 12:27:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.253 12:27:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.253 ************************************ 00:07:53.253 START TEST raid_superblock_test 00:07:53.253 ************************************ 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73695 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73695 00:07:53.253 
12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73695 ']' 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.253 12:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.514 [2024-11-19 12:27:58.531452] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:53.514 [2024-11-19 12:27:58.531665] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73695 ] 00:07:53.514 [2024-11-19 12:27:58.693799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.514 [2024-11-19 12:27:58.740220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.774 [2024-11-19 12:27:58.783207] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.774 [2024-11-19 12:27:58.783321] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 
00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.359 malloc1 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.359 [2024-11-19 12:27:59.373835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.359 [2024-11-19 12:27:59.373966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.359 [2024-11-19 12:27:59.374005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:07:54.359 [2024-11-19 12:27:59.374040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.359 [2024-11-19 12:27:59.376171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.359 [2024-11-19 12:27:59.376212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.359 pt1 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.359 malloc2 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.359 [2024-11-19 12:27:59.417713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.359 [2024-11-19 12:27:59.417944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.359 [2024-11-19 12:27:59.418027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:54.359 [2024-11-19 12:27:59.418111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.359 [2024-11-19 12:27:59.423026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.359 [2024-11-19 12:27:59.423176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.359 pt2 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.359 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.359 [2024-11-19 12:27:59.431536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.359 [2024-11-19 12:27:59.434553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.359 [2024-11-19 12:27:59.434859] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006280 00:07:54.359 [2024-11-19 12:27:59.434936] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.359 [2024-11-19 12:27:59.435364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:54.359 [2024-11-19 12:27:59.435618] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:54.359 [2024-11-19 12:27:59.435689] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:54.360 [2024-11-19 12:27:59.436048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.360 "name": "raid_bdev1", 00:07:54.360 "uuid": "6f133216-d02f-4dce-b320-4bb80a204949", 00:07:54.360 "strip_size_kb": 64, 00:07:54.360 "state": "online", 00:07:54.360 "raid_level": "concat", 00:07:54.360 "superblock": true, 00:07:54.360 "num_base_bdevs": 2, 00:07:54.360 "num_base_bdevs_discovered": 2, 00:07:54.360 "num_base_bdevs_operational": 2, 00:07:54.360 "base_bdevs_list": [ 00:07:54.360 { 00:07:54.360 "name": "pt1", 00:07:54.360 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.360 "is_configured": true, 00:07:54.360 "data_offset": 2048, 00:07:54.360 "data_size": 63488 00:07:54.360 }, 00:07:54.360 { 00:07:54.360 "name": "pt2", 00:07:54.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.360 "is_configured": true, 00:07:54.360 "data_offset": 2048, 00:07:54.360 "data_size": 63488 00:07:54.360 } 00:07:54.360 ] 00:07:54.360 }' 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.360 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.620 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.620 [2024-11-19 12:27:59.867533] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.881 "name": "raid_bdev1", 00:07:54.881 "aliases": [ 00:07:54.881 "6f133216-d02f-4dce-b320-4bb80a204949" 00:07:54.881 ], 00:07:54.881 "product_name": "Raid Volume", 00:07:54.881 "block_size": 512, 00:07:54.881 "num_blocks": 126976, 00:07:54.881 "uuid": "6f133216-d02f-4dce-b320-4bb80a204949", 00:07:54.881 "assigned_rate_limits": { 00:07:54.881 "rw_ios_per_sec": 0, 00:07:54.881 "rw_mbytes_per_sec": 0, 00:07:54.881 "r_mbytes_per_sec": 0, 00:07:54.881 "w_mbytes_per_sec": 0 00:07:54.881 }, 00:07:54.881 "claimed": false, 00:07:54.881 "zoned": false, 00:07:54.881 "supported_io_types": { 00:07:54.881 "read": true, 00:07:54.881 "write": true, 00:07:54.881 "unmap": true, 00:07:54.881 "flush": true, 00:07:54.881 "reset": true, 00:07:54.881 "nvme_admin": false, 00:07:54.881 "nvme_io": false, 00:07:54.881 "nvme_io_md": false, 00:07:54.881 "write_zeroes": true, 00:07:54.881 "zcopy": false, 00:07:54.881 "get_zone_info": false, 00:07:54.881 "zone_management": false, 00:07:54.881 
"zone_append": false, 00:07:54.881 "compare": false, 00:07:54.881 "compare_and_write": false, 00:07:54.881 "abort": false, 00:07:54.881 "seek_hole": false, 00:07:54.881 "seek_data": false, 00:07:54.881 "copy": false, 00:07:54.881 "nvme_iov_md": false 00:07:54.881 }, 00:07:54.881 "memory_domains": [ 00:07:54.881 { 00:07:54.881 "dma_device_id": "system", 00:07:54.881 "dma_device_type": 1 00:07:54.881 }, 00:07:54.881 { 00:07:54.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.881 "dma_device_type": 2 00:07:54.881 }, 00:07:54.881 { 00:07:54.881 "dma_device_id": "system", 00:07:54.881 "dma_device_type": 1 00:07:54.881 }, 00:07:54.881 { 00:07:54.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.881 "dma_device_type": 2 00:07:54.881 } 00:07:54.881 ], 00:07:54.881 "driver_specific": { 00:07:54.881 "raid": { 00:07:54.881 "uuid": "6f133216-d02f-4dce-b320-4bb80a204949", 00:07:54.881 "strip_size_kb": 64, 00:07:54.881 "state": "online", 00:07:54.881 "raid_level": "concat", 00:07:54.881 "superblock": true, 00:07:54.881 "num_base_bdevs": 2, 00:07:54.881 "num_base_bdevs_discovered": 2, 00:07:54.881 "num_base_bdevs_operational": 2, 00:07:54.881 "base_bdevs_list": [ 00:07:54.881 { 00:07:54.881 "name": "pt1", 00:07:54.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.881 "is_configured": true, 00:07:54.881 "data_offset": 2048, 00:07:54.881 "data_size": 63488 00:07:54.881 }, 00:07:54.881 { 00:07:54.881 "name": "pt2", 00:07:54.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.881 "is_configured": true, 00:07:54.881 "data_offset": 2048, 00:07:54.881 "data_size": 63488 00:07:54.881 } 00:07:54.881 ] 00:07:54.881 } 00:07:54.881 } 00:07:54.881 }' 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:54.881 pt2' 00:07:54.881 12:27:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.881 12:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.881 [2024-11-19 12:28:00.075183] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6f133216-d02f-4dce-b320-4bb80a204949 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6f133216-d02f-4dce-b320-4bb80a204949 ']' 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.881 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.881 [2024-11-19 12:28:00.122874] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.881 [2024-11-19 12:28:00.122938] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.881 [2024-11-19 12:28:00.123042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.881 [2024-11-19 12:28:00.123111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.882 [2024-11-19 12:28:00.123168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:54.882 12:28:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.882 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:54.882 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.882 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.882 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:55.143 12:28:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 [2024-11-19 12:28:00.262703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:55.143 [2024-11-19 12:28:00.264546] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:55.143 [2024-11-19 12:28:00.264619] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:55.143 [2024-11-19 12:28:00.264664] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:55.143 [2024-11-19 12:28:00.264696] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.143 [2024-11-19 12:28:00.264705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:55.143 request: 00:07:55.143 { 00:07:55.143 "name": "raid_bdev1", 00:07:55.143 "raid_level": "concat", 00:07:55.143 "base_bdevs": [ 00:07:55.143 "malloc1", 00:07:55.143 "malloc2" 00:07:55.143 ], 00:07:55.143 "strip_size_kb": 64, 00:07:55.143 "superblock": false, 00:07:55.143 "method": "bdev_raid_create", 00:07:55.143 "req_id": 1 00:07:55.143 } 00:07:55.143 Got JSON-RPC error response 00:07:55.143 response: 00:07:55.143 { 00:07:55.143 "code": -17, 00:07:55.143 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:55.143 } 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.143 12:28:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 [2024-11-19 12:28:00.330539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:55.143 [2024-11-19 12:28:00.330620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.143 [2024-11-19 12:28:00.330660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:55.143 [2024-11-19 12:28:00.330686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.143 [2024-11-19 12:28:00.332773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.143 [2024-11-19 12:28:00.332837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.143 [2024-11-19 12:28:00.332924] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:55.143 [2024-11-19 12:28:00.332993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.143 pt1 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.143 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.144 "name": "raid_bdev1", 00:07:55.144 "uuid": "6f133216-d02f-4dce-b320-4bb80a204949", 00:07:55.144 "strip_size_kb": 64, 00:07:55.144 "state": "configuring", 00:07:55.144 "raid_level": "concat", 00:07:55.144 "superblock": true, 00:07:55.144 "num_base_bdevs": 2, 00:07:55.144 
"num_base_bdevs_discovered": 1, 00:07:55.144 "num_base_bdevs_operational": 2, 00:07:55.144 "base_bdevs_list": [ 00:07:55.144 { 00:07:55.144 "name": "pt1", 00:07:55.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.144 "is_configured": true, 00:07:55.144 "data_offset": 2048, 00:07:55.144 "data_size": 63488 00:07:55.144 }, 00:07:55.144 { 00:07:55.144 "name": null, 00:07:55.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.144 "is_configured": false, 00:07:55.144 "data_offset": 2048, 00:07:55.144 "data_size": 63488 00:07:55.144 } 00:07:55.144 ] 00:07:55.144 }' 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.144 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.715 [2024-11-19 12:28:00.781817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.715 [2024-11-19 12:28:00.781886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.715 [2024-11-19 12:28:00.781911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:55.715 [2024-11-19 12:28:00.781920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.715 [2024-11-19 12:28:00.782349] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.715 [2024-11-19 12:28:00.782380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.715 [2024-11-19 12:28:00.782461] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:55.715 [2024-11-19 12:28:00.782482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.715 [2024-11-19 12:28:00.782583] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:55.715 [2024-11-19 12:28:00.782598] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.715 [2024-11-19 12:28:00.782852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:55.715 [2024-11-19 12:28:00.783021] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:55.715 [2024-11-19 12:28:00.783042] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:55.715 [2024-11-19 12:28:00.783149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.715 pt2 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.715 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.715 "name": "raid_bdev1", 00:07:55.715 "uuid": "6f133216-d02f-4dce-b320-4bb80a204949", 00:07:55.715 "strip_size_kb": 64, 00:07:55.715 "state": "online", 00:07:55.715 "raid_level": "concat", 00:07:55.715 "superblock": true, 00:07:55.715 "num_base_bdevs": 2, 00:07:55.715 "num_base_bdevs_discovered": 2, 00:07:55.715 "num_base_bdevs_operational": 2, 00:07:55.715 "base_bdevs_list": [ 00:07:55.716 { 00:07:55.716 "name": "pt1", 00:07:55.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.716 "is_configured": true, 00:07:55.716 "data_offset": 2048, 00:07:55.716 "data_size": 63488 00:07:55.716 }, 00:07:55.716 { 00:07:55.716 "name": "pt2", 00:07:55.716 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:55.716 "is_configured": true, 00:07:55.716 "data_offset": 2048, 00:07:55.716 "data_size": 63488 00:07:55.716 } 00:07:55.716 ] 00:07:55.716 }' 00:07:55.716 12:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.716 12:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.976 [2024-11-19 12:28:01.209339] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.976 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.236 "name": "raid_bdev1", 00:07:56.236 "aliases": [ 00:07:56.236 "6f133216-d02f-4dce-b320-4bb80a204949" 00:07:56.236 ], 00:07:56.236 "product_name": "Raid Volume", 00:07:56.236 "block_size": 512, 00:07:56.236 
"num_blocks": 126976, 00:07:56.236 "uuid": "6f133216-d02f-4dce-b320-4bb80a204949", 00:07:56.236 "assigned_rate_limits": { 00:07:56.236 "rw_ios_per_sec": 0, 00:07:56.236 "rw_mbytes_per_sec": 0, 00:07:56.236 "r_mbytes_per_sec": 0, 00:07:56.236 "w_mbytes_per_sec": 0 00:07:56.236 }, 00:07:56.236 "claimed": false, 00:07:56.236 "zoned": false, 00:07:56.236 "supported_io_types": { 00:07:56.236 "read": true, 00:07:56.236 "write": true, 00:07:56.236 "unmap": true, 00:07:56.236 "flush": true, 00:07:56.236 "reset": true, 00:07:56.236 "nvme_admin": false, 00:07:56.236 "nvme_io": false, 00:07:56.236 "nvme_io_md": false, 00:07:56.236 "write_zeroes": true, 00:07:56.236 "zcopy": false, 00:07:56.236 "get_zone_info": false, 00:07:56.236 "zone_management": false, 00:07:56.236 "zone_append": false, 00:07:56.236 "compare": false, 00:07:56.236 "compare_and_write": false, 00:07:56.236 "abort": false, 00:07:56.236 "seek_hole": false, 00:07:56.236 "seek_data": false, 00:07:56.236 "copy": false, 00:07:56.236 "nvme_iov_md": false 00:07:56.236 }, 00:07:56.236 "memory_domains": [ 00:07:56.236 { 00:07:56.236 "dma_device_id": "system", 00:07:56.236 "dma_device_type": 1 00:07:56.236 }, 00:07:56.236 { 00:07:56.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.236 "dma_device_type": 2 00:07:56.236 }, 00:07:56.236 { 00:07:56.236 "dma_device_id": "system", 00:07:56.236 "dma_device_type": 1 00:07:56.236 }, 00:07:56.236 { 00:07:56.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.236 "dma_device_type": 2 00:07:56.236 } 00:07:56.236 ], 00:07:56.236 "driver_specific": { 00:07:56.236 "raid": { 00:07:56.236 "uuid": "6f133216-d02f-4dce-b320-4bb80a204949", 00:07:56.236 "strip_size_kb": 64, 00:07:56.236 "state": "online", 00:07:56.236 "raid_level": "concat", 00:07:56.236 "superblock": true, 00:07:56.236 "num_base_bdevs": 2, 00:07:56.236 "num_base_bdevs_discovered": 2, 00:07:56.236 "num_base_bdevs_operational": 2, 00:07:56.236 "base_bdevs_list": [ 00:07:56.236 { 00:07:56.236 "name": "pt1", 
00:07:56.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.236 "is_configured": true, 00:07:56.236 "data_offset": 2048, 00:07:56.236 "data_size": 63488 00:07:56.236 }, 00:07:56.236 { 00:07:56.236 "name": "pt2", 00:07:56.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.236 "is_configured": true, 00:07:56.236 "data_offset": 2048, 00:07:56.236 "data_size": 63488 00:07:56.236 } 00:07:56.236 ] 00:07:56.236 } 00:07:56.236 } 00:07:56.236 }' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.236 pt2' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:56.236 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.237 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.237 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.237 [2024-11-19 12:28:01.460851] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.237 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6f133216-d02f-4dce-b320-4bb80a204949 '!=' 6f133216-d02f-4dce-b320-4bb80a204949 ']' 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 73695 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73695 ']' 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73695 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73695 00:07:56.498 killing process with pid 73695 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73695' 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73695 00:07:56.498 [2024-11-19 12:28:01.543914] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.498 [2024-11-19 12:28:01.543996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.498 [2024-11-19 12:28:01.544047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.498 [2024-11-19 12:28:01.544055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:56.498 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73695 00:07:56.498 [2024-11-19 12:28:01.566619] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.758 12:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:56.758 00:07:56.758 real 0m3.370s 00:07:56.758 user 0m5.166s 00:07:56.758 
sys 0m0.737s 00:07:56.758 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.758 12:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.758 ************************************ 00:07:56.758 END TEST raid_superblock_test 00:07:56.758 ************************************ 00:07:56.758 12:28:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:56.759 12:28:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:56.759 12:28:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.759 12:28:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.759 ************************************ 00:07:56.759 START TEST raid_read_error_test 00:07:56.759 ************************************ 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AbOZLiLJlY 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73895 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73895 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73895 ']' 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.759 12:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.759 [2024-11-19 12:28:01.984275] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:56.759 [2024-11-19 12:28:01.984411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73895 ] 00:07:57.018 [2024-11-19 12:28:02.142054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.018 [2024-11-19 12:28:02.191599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.019 [2024-11-19 12:28:02.234930] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.019 [2024-11-19 12:28:02.234974] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.588 BaseBdev1_malloc 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.588 true 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.588 [2024-11-19 12:28:02.838143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:57.588 [2024-11-19 12:28:02.838259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.588 [2024-11-19 12:28:02.838292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:57.588 [2024-11-19 12:28:02.838301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.588 [2024-11-19 12:28:02.840488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.588 [2024-11-19 12:28:02.840525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:57.588 BaseBdev1 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.588 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.849 BaseBdev2_malloc 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.849 true 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.849 [2024-11-19 12:28:02.889645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:57.849 [2024-11-19 12:28:02.889739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.849 [2024-11-19 12:28:02.889774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:57.849 [2024-11-19 12:28:02.889783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.849 [2024-11-19 12:28:02.891843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:07:57.849 [2024-11-19 12:28:02.891878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:57.849 BaseBdev2 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.849 [2024-11-19 12:28:02.901668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.849 [2024-11-19 12:28:02.903568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.849 [2024-11-19 12:28:02.903787] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:57.849 [2024-11-19 12:28:02.903837] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:57.849 [2024-11-19 12:28:02.904108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:57.849 [2024-11-19 12:28:02.904281] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:57.849 [2024-11-19 12:28:02.904324] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:57.849 [2024-11-19 12:28:02.904495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.849 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.849 "name": "raid_bdev1", 00:07:57.849 "uuid": "df0eea34-d91b-4819-afce-25cf11330a3f", 00:07:57.849 "strip_size_kb": 64, 00:07:57.849 "state": "online", 00:07:57.849 "raid_level": "concat", 00:07:57.849 "superblock": true, 00:07:57.849 "num_base_bdevs": 2, 00:07:57.849 "num_base_bdevs_discovered": 2, 00:07:57.849 "num_base_bdevs_operational": 2, 00:07:57.849 "base_bdevs_list": [ 00:07:57.849 { 00:07:57.849 "name": "BaseBdev1", 00:07:57.849 "uuid": 
"c4f995af-8840-5487-84f4-7cc2e1fcc328", 00:07:57.849 "is_configured": true, 00:07:57.849 "data_offset": 2048, 00:07:57.849 "data_size": 63488 00:07:57.849 }, 00:07:57.849 { 00:07:57.849 "name": "BaseBdev2", 00:07:57.849 "uuid": "85c4bb3e-39d2-509d-8d33-2038bfdaf556", 00:07:57.849 "is_configured": true, 00:07:57.849 "data_offset": 2048, 00:07:57.850 "data_size": 63488 00:07:57.850 } 00:07:57.850 ] 00:07:57.850 }' 00:07:57.850 12:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.850 12:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.109 12:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:58.109 12:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:58.368 [2024-11-19 12:28:03.433122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.319 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.319 "name": "raid_bdev1", 00:07:59.319 "uuid": "df0eea34-d91b-4819-afce-25cf11330a3f", 00:07:59.319 "strip_size_kb": 64, 00:07:59.319 "state": "online", 00:07:59.319 "raid_level": "concat", 00:07:59.319 "superblock": true, 00:07:59.319 "num_base_bdevs": 2, 00:07:59.319 "num_base_bdevs_discovered": 2, 00:07:59.319 "num_base_bdevs_operational": 2, 00:07:59.319 "base_bdevs_list": [ 00:07:59.319 { 00:07:59.319 "name": "BaseBdev1", 00:07:59.319 "uuid": 
"c4f995af-8840-5487-84f4-7cc2e1fcc328", 00:07:59.319 "is_configured": true, 00:07:59.319 "data_offset": 2048, 00:07:59.319 "data_size": 63488 00:07:59.319 }, 00:07:59.319 { 00:07:59.319 "name": "BaseBdev2", 00:07:59.319 "uuid": "85c4bb3e-39d2-509d-8d33-2038bfdaf556", 00:07:59.319 "is_configured": true, 00:07:59.320 "data_offset": 2048, 00:07:59.320 "data_size": 63488 00:07:59.320 } 00:07:59.320 ] 00:07:59.320 }' 00:07:59.320 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.320 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.591 [2024-11-19 12:28:04.800813] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.591 [2024-11-19 12:28:04.800895] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.591 [2024-11-19 12:28:04.803493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.591 [2024-11-19 12:28:04.803580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.591 [2024-11-19 12:28:04.803635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.591 [2024-11-19 12:28:04.803677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:59.591 { 00:07:59.591 "results": [ 00:07:59.591 { 00:07:59.591 "job": "raid_bdev1", 00:07:59.591 "core_mask": "0x1", 00:07:59.591 "workload": "randrw", 00:07:59.591 "percentage": 50, 00:07:59.591 "status": "finished", 00:07:59.591 "queue_depth": 1, 00:07:59.591 "io_size": 
131072, 00:07:59.591 "runtime": 1.368654, 00:07:59.591 "iops": 17166.500810285143, 00:07:59.591 "mibps": 2145.812601285643, 00:07:59.591 "io_failed": 1, 00:07:59.591 "io_timeout": 0, 00:07:59.591 "avg_latency_us": 80.67348763628632, 00:07:59.591 "min_latency_us": 25.2646288209607, 00:07:59.591 "max_latency_us": 1423.7624454148472 00:07:59.591 } 00:07:59.591 ], 00:07:59.591 "core_count": 1 00:07:59.591 } 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73895 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73895 ']' 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73895 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.591 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73895 00:07:59.850 killing process with pid 73895 00:07:59.850 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.850 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.850 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73895' 00:07:59.850 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73895 00:07:59.850 [2024-11-19 12:28:04.851964] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.850 12:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73895 00:07:59.850 [2024-11-19 12:28:04.867173] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.850 12:28:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:59.850 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AbOZLiLJlY 00:07:59.850 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:00.109 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:00.109 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:00.109 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.109 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:00.109 12:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:00.109 00:08:00.109 real 0m3.235s 00:08:00.109 user 0m4.085s 00:08:00.109 sys 0m0.525s 00:08:00.109 ************************************ 00:08:00.109 END TEST raid_read_error_test 00:08:00.109 ************************************ 00:08:00.109 12:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.109 12:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.109 12:28:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:00.109 12:28:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.109 12:28:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.109 12:28:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.109 ************************************ 00:08:00.109 START TEST raid_write_error_test 00:08:00.109 ************************************ 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:00.109 
12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.M5nydPEgKy 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74024 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74024 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74024 ']' 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.109 12:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.109 [2024-11-19 12:28:05.291641] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:00.109 [2024-11-19 12:28:05.291790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74024 ] 00:08:00.368 [2024-11-19 12:28:05.450586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.368 [2024-11-19 12:28:05.499740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.368 [2024-11-19 12:28:05.543557] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.368 [2024-11-19 12:28:05.543614] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.935 BaseBdev1_malloc 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.935 true 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.935 [2024-11-19 12:28:06.178883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:00.935 [2024-11-19 12:28:06.178991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.935 [2024-11-19 12:28:06.179050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:00.935 [2024-11-19 12:28:06.179098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.935 [2024-11-19 12:28:06.181275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.935 [2024-11-19 12:28:06.181349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:00.935 BaseBdev1 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.935 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.194 BaseBdev2_malloc 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:01.194 12:28:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.194 true 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.194 [2024-11-19 12:28:06.230347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:01.194 [2024-11-19 12:28:06.230399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.194 [2024-11-19 12:28:06.230417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:01.194 [2024-11-19 12:28:06.230425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.194 [2024-11-19 12:28:06.232518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.194 [2024-11-19 12:28:06.232555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:01.194 BaseBdev2 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.194 [2024-11-19 12:28:06.242360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:01.194 [2024-11-19 12:28:06.244234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.194 [2024-11-19 12:28:06.244460] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:01.194 [2024-11-19 12:28:06.244509] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:01.194 [2024-11-19 12:28:06.244783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:01.194 [2024-11-19 12:28:06.244942] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:01.194 [2024-11-19 12:28:06.244989] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:01.194 [2024-11-19 12:28:06.245148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.194 12:28:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.194 "name": "raid_bdev1", 00:08:01.194 "uuid": "4f12b83d-0a5b-4d02-bde0-f9dcef5b18c0", 00:08:01.194 "strip_size_kb": 64, 00:08:01.194 "state": "online", 00:08:01.194 "raid_level": "concat", 00:08:01.194 "superblock": true, 00:08:01.194 "num_base_bdevs": 2, 00:08:01.194 "num_base_bdevs_discovered": 2, 00:08:01.194 "num_base_bdevs_operational": 2, 00:08:01.194 "base_bdevs_list": [ 00:08:01.194 { 00:08:01.194 "name": "BaseBdev1", 00:08:01.194 "uuid": "51d1bc97-9512-554e-9816-832dd2996923", 00:08:01.194 "is_configured": true, 00:08:01.194 "data_offset": 2048, 00:08:01.194 "data_size": 63488 00:08:01.194 }, 00:08:01.194 { 00:08:01.194 "name": "BaseBdev2", 00:08:01.194 "uuid": "ef893ee3-ded3-527c-ac19-8ce2b96497e7", 00:08:01.194 "is_configured": true, 00:08:01.194 "data_offset": 2048, 00:08:01.194 "data_size": 63488 00:08:01.194 } 00:08:01.194 ] 00:08:01.194 }' 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.194 12:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.454 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:01.454 12:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.713 [2024-11-19 12:28:06.741942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.649 "name": "raid_bdev1", 00:08:02.649 "uuid": "4f12b83d-0a5b-4d02-bde0-f9dcef5b18c0", 00:08:02.649 "strip_size_kb": 64, 00:08:02.649 "state": "online", 00:08:02.649 "raid_level": "concat", 00:08:02.649 "superblock": true, 00:08:02.649 "num_base_bdevs": 2, 00:08:02.649 "num_base_bdevs_discovered": 2, 00:08:02.649 "num_base_bdevs_operational": 2, 00:08:02.649 "base_bdevs_list": [ 00:08:02.649 { 00:08:02.649 "name": "BaseBdev1", 00:08:02.649 "uuid": "51d1bc97-9512-554e-9816-832dd2996923", 00:08:02.649 "is_configured": true, 00:08:02.649 "data_offset": 2048, 00:08:02.649 "data_size": 63488 00:08:02.649 }, 00:08:02.649 { 00:08:02.649 "name": "BaseBdev2", 00:08:02.649 "uuid": "ef893ee3-ded3-527c-ac19-8ce2b96497e7", 00:08:02.649 "is_configured": true, 00:08:02.649 "data_offset": 2048, 00:08:02.649 "data_size": 63488 00:08:02.649 } 00:08:02.649 ] 00:08:02.649 }' 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.649 12:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.908 12:28:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.908 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.908 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.908 [2024-11-19 12:28:08.121562] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.908 [2024-11-19 12:28:08.121648] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.908 [2024-11-19 12:28:08.124194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.908 [2024-11-19 12:28:08.124269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.908 [2024-11-19 12:28:08.124320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.908 [2024-11-19 12:28:08.124361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:02.908 { 00:08:02.908 "results": [ 00:08:02.908 { 00:08:02.908 "job": "raid_bdev1", 00:08:02.908 "core_mask": "0x1", 00:08:02.908 "workload": "randrw", 00:08:02.908 "percentage": 50, 00:08:02.908 "status": "finished", 00:08:02.908 "queue_depth": 1, 00:08:02.908 "io_size": 131072, 00:08:02.908 "runtime": 1.380427, 00:08:02.908 "iops": 17065.009594857245, 00:08:02.908 "mibps": 2133.1261993571557, 00:08:02.908 "io_failed": 1, 00:08:02.908 "io_timeout": 0, 00:08:02.908 "avg_latency_us": 81.11833456847748, 00:08:02.909 "min_latency_us": 24.929257641921396, 00:08:02.909 "max_latency_us": 1373.6803493449781 00:08:02.909 } 00:08:02.909 ], 00:08:02.909 "core_count": 1 00:08:02.909 } 00:08:02.909 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.909 12:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74024 00:08:02.909 12:28:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74024 ']' 00:08:02.909 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74024 00:08:02.909 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:02.909 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.909 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74024 00:08:03.167 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.167 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.167 killing process with pid 74024 00:08:03.167 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74024' 00:08:03.167 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74024 00:08:03.167 [2024-11-19 12:28:08.172702] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.167 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74024 00:08:03.167 [2024-11-19 12:28:08.188719] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.427 12:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.M5nydPEgKy 00:08:03.427 12:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:03.427 12:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:03.427 12:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:03.427 12:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:03.427 12:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.427 12:28:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.427 12:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:03.427 00:08:03.427 real 0m3.253s 00:08:03.427 user 0m4.107s 00:08:03.427 sys 0m0.530s 00:08:03.427 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.427 12:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.427 ************************************ 00:08:03.427 END TEST raid_write_error_test 00:08:03.427 ************************************ 00:08:03.427 12:28:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:03.427 12:28:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:03.427 12:28:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:03.427 12:28:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.427 12:28:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.427 ************************************ 00:08:03.427 START TEST raid_state_function_test 00:08:03.427 ************************************ 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74151 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74151' 00:08:03.427 Process raid pid: 74151 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74151 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74151 ']' 00:08:03.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.427 12:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.427 [2024-11-19 12:28:08.615901] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:03.427 [2024-11-19 12:28:08.616157] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.686 [2024-11-19 12:28:08.782353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.686 [2024-11-19 12:28:08.830577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.686 [2024-11-19 12:28:08.874852] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.686 [2024-11-19 12:28:08.874890] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.252 [2024-11-19 12:28:09.473197] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.252 [2024-11-19 12:28:09.473258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.252 [2024-11-19 12:28:09.473295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.252 [2024-11-19 12:28:09.473307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.252 12:28:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.252 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.511 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.511 "name": "Existed_Raid", 00:08:04.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.511 "strip_size_kb": 0, 00:08:04.511 "state": "configuring", 00:08:04.511 
"raid_level": "raid1", 00:08:04.511 "superblock": false, 00:08:04.511 "num_base_bdevs": 2, 00:08:04.511 "num_base_bdevs_discovered": 0, 00:08:04.511 "num_base_bdevs_operational": 2, 00:08:04.511 "base_bdevs_list": [ 00:08:04.511 { 00:08:04.511 "name": "BaseBdev1", 00:08:04.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.511 "is_configured": false, 00:08:04.511 "data_offset": 0, 00:08:04.511 "data_size": 0 00:08:04.511 }, 00:08:04.511 { 00:08:04.511 "name": "BaseBdev2", 00:08:04.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.511 "is_configured": false, 00:08:04.511 "data_offset": 0, 00:08:04.511 "data_size": 0 00:08:04.511 } 00:08:04.511 ] 00:08:04.511 }' 00:08:04.511 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.511 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.770 [2024-11-19 12:28:09.876435] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.770 [2024-11-19 12:28:09.876581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:04.770 [2024-11-19 12:28:09.888426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.770 [2024-11-19 12:28:09.888538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.770 [2024-11-19 12:28:09.888566] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.770 [2024-11-19 12:28:09.888588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.770 [2024-11-19 12:28:09.909831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.770 BaseBdev1 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.770 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.770 [ 00:08:04.770 { 00:08:04.770 "name": "BaseBdev1", 00:08:04.770 "aliases": [ 00:08:04.770 "d9a9ee13-47c7-43a6-8ac9-fc0fde8c624f" 00:08:04.770 ], 00:08:04.770 "product_name": "Malloc disk", 00:08:04.770 "block_size": 512, 00:08:04.770 "num_blocks": 65536, 00:08:04.770 "uuid": "d9a9ee13-47c7-43a6-8ac9-fc0fde8c624f", 00:08:04.770 "assigned_rate_limits": { 00:08:04.770 "rw_ios_per_sec": 0, 00:08:04.770 "rw_mbytes_per_sec": 0, 00:08:04.770 "r_mbytes_per_sec": 0, 00:08:04.770 "w_mbytes_per_sec": 0 00:08:04.770 }, 00:08:04.770 "claimed": true, 00:08:04.770 "claim_type": "exclusive_write", 00:08:04.770 "zoned": false, 00:08:04.770 "supported_io_types": { 00:08:04.770 "read": true, 00:08:04.770 "write": true, 00:08:04.770 "unmap": true, 00:08:04.770 "flush": true, 00:08:04.770 "reset": true, 00:08:04.770 "nvme_admin": false, 00:08:04.770 "nvme_io": false, 00:08:04.770 "nvme_io_md": false, 00:08:04.770 "write_zeroes": true, 00:08:04.770 "zcopy": true, 00:08:04.770 "get_zone_info": false, 00:08:04.770 "zone_management": false, 00:08:04.770 "zone_append": false, 00:08:04.770 "compare": false, 00:08:04.770 "compare_and_write": false, 00:08:04.770 "abort": true, 00:08:04.770 "seek_hole": false, 00:08:04.770 "seek_data": false, 00:08:04.770 "copy": true, 00:08:04.770 "nvme_iov_md": 
false 00:08:04.770 }, 00:08:04.770 "memory_domains": [ 00:08:04.770 { 00:08:04.770 "dma_device_id": "system", 00:08:04.770 "dma_device_type": 1 00:08:04.770 }, 00:08:04.770 { 00:08:04.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.770 "dma_device_type": 2 00:08:04.771 } 00:08:04.771 ], 00:08:04.771 "driver_specific": {} 00:08:04.771 } 00:08:04.771 ] 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.771 
12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.771 "name": "Existed_Raid", 00:08:04.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.771 "strip_size_kb": 0, 00:08:04.771 "state": "configuring", 00:08:04.771 "raid_level": "raid1", 00:08:04.771 "superblock": false, 00:08:04.771 "num_base_bdevs": 2, 00:08:04.771 "num_base_bdevs_discovered": 1, 00:08:04.771 "num_base_bdevs_operational": 2, 00:08:04.771 "base_bdevs_list": [ 00:08:04.771 { 00:08:04.771 "name": "BaseBdev1", 00:08:04.771 "uuid": "d9a9ee13-47c7-43a6-8ac9-fc0fde8c624f", 00:08:04.771 "is_configured": true, 00:08:04.771 "data_offset": 0, 00:08:04.771 "data_size": 65536 00:08:04.771 }, 00:08:04.771 { 00:08:04.771 "name": "BaseBdev2", 00:08:04.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.771 "is_configured": false, 00:08:04.771 "data_offset": 0, 00:08:04.771 "data_size": 0 00:08:04.771 } 00:08:04.771 ] 00:08:04.771 }' 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.771 12:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.354 [2024-11-19 12:28:10.397042] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.354 [2024-11-19 12:28:10.397183] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.354 [2024-11-19 12:28:10.409017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.354 [2024-11-19 12:28:10.411018] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.354 [2024-11-19 12:28:10.411097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.354 "name": "Existed_Raid", 00:08:05.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.354 "strip_size_kb": 0, 00:08:05.354 "state": "configuring", 00:08:05.354 "raid_level": "raid1", 00:08:05.354 "superblock": false, 00:08:05.354 "num_base_bdevs": 2, 00:08:05.354 "num_base_bdevs_discovered": 1, 00:08:05.354 "num_base_bdevs_operational": 2, 00:08:05.354 "base_bdevs_list": [ 00:08:05.354 { 00:08:05.354 "name": "BaseBdev1", 00:08:05.354 "uuid": "d9a9ee13-47c7-43a6-8ac9-fc0fde8c624f", 00:08:05.354 "is_configured": true, 00:08:05.354 "data_offset": 0, 00:08:05.354 "data_size": 65536 00:08:05.354 }, 00:08:05.354 { 00:08:05.354 "name": "BaseBdev2", 00:08:05.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.354 "is_configured": false, 00:08:05.354 "data_offset": 0, 00:08:05.354 "data_size": 0 00:08:05.354 } 00:08:05.354 ] 
00:08:05.354 }' 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.354 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.614 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:05.614 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.614 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.614 [2024-11-19 12:28:10.869492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.614 [2024-11-19 12:28:10.869660] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:05.614 [2024-11-19 12:28:10.869697] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:05.614 [2024-11-19 12:28:10.870090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:05.614 [2024-11-19 12:28:10.870327] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:05.614 [2024-11-19 12:28:10.870386] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:05.615 [2024-11-19 12:28:10.870711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.615 BaseBdev2 00:08:05.615 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.615 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.875 [ 00:08:05.875 { 00:08:05.875 "name": "BaseBdev2", 00:08:05.875 "aliases": [ 00:08:05.875 "42af724c-da50-4339-ac9f-ab8bb9997195" 00:08:05.875 ], 00:08:05.875 "product_name": "Malloc disk", 00:08:05.875 "block_size": 512, 00:08:05.875 "num_blocks": 65536, 00:08:05.875 "uuid": "42af724c-da50-4339-ac9f-ab8bb9997195", 00:08:05.875 "assigned_rate_limits": { 00:08:05.875 "rw_ios_per_sec": 0, 00:08:05.875 "rw_mbytes_per_sec": 0, 00:08:05.875 "r_mbytes_per_sec": 0, 00:08:05.875 "w_mbytes_per_sec": 0 00:08:05.875 }, 00:08:05.875 "claimed": true, 00:08:05.875 "claim_type": "exclusive_write", 00:08:05.875 "zoned": false, 00:08:05.875 "supported_io_types": { 00:08:05.875 "read": true, 00:08:05.875 "write": true, 00:08:05.875 "unmap": true, 00:08:05.875 "flush": true, 00:08:05.875 "reset": true, 00:08:05.875 "nvme_admin": false, 00:08:05.875 "nvme_io": false, 00:08:05.875 "nvme_io_md": false, 00:08:05.875 "write_zeroes": 
true, 00:08:05.875 "zcopy": true, 00:08:05.875 "get_zone_info": false, 00:08:05.875 "zone_management": false, 00:08:05.875 "zone_append": false, 00:08:05.875 "compare": false, 00:08:05.875 "compare_and_write": false, 00:08:05.875 "abort": true, 00:08:05.875 "seek_hole": false, 00:08:05.875 "seek_data": false, 00:08:05.875 "copy": true, 00:08:05.875 "nvme_iov_md": false 00:08:05.875 }, 00:08:05.875 "memory_domains": [ 00:08:05.875 { 00:08:05.875 "dma_device_id": "system", 00:08:05.875 "dma_device_type": 1 00:08:05.875 }, 00:08:05.875 { 00:08:05.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.875 "dma_device_type": 2 00:08:05.875 } 00:08:05.875 ], 00:08:05.875 "driver_specific": {} 00:08:05.875 } 00:08:05.875 ] 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.875 12:28:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.875 "name": "Existed_Raid", 00:08:05.875 "uuid": "e86bf8fa-ca04-4f63-92d3-7d182e88c5b5", 00:08:05.875 "strip_size_kb": 0, 00:08:05.875 "state": "online", 00:08:05.875 "raid_level": "raid1", 00:08:05.875 "superblock": false, 00:08:05.875 "num_base_bdevs": 2, 00:08:05.875 "num_base_bdevs_discovered": 2, 00:08:05.875 "num_base_bdevs_operational": 2, 00:08:05.875 "base_bdevs_list": [ 00:08:05.875 { 00:08:05.875 "name": "BaseBdev1", 00:08:05.875 "uuid": "d9a9ee13-47c7-43a6-8ac9-fc0fde8c624f", 00:08:05.875 "is_configured": true, 00:08:05.875 "data_offset": 0, 00:08:05.875 "data_size": 65536 00:08:05.875 }, 00:08:05.875 { 00:08:05.875 "name": "BaseBdev2", 00:08:05.875 "uuid": "42af724c-da50-4339-ac9f-ab8bb9997195", 00:08:05.875 "is_configured": true, 00:08:05.875 "data_offset": 0, 00:08:05.875 "data_size": 65536 00:08:05.875 } 00:08:05.875 ] 00:08:05.875 }' 00:08:05.875 12:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.875 12:28:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.135 [2024-11-19 12:28:11.273145] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.135 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.135 "name": "Existed_Raid", 00:08:06.135 "aliases": [ 00:08:06.135 "e86bf8fa-ca04-4f63-92d3-7d182e88c5b5" 00:08:06.135 ], 00:08:06.135 "product_name": "Raid Volume", 00:08:06.135 "block_size": 512, 00:08:06.135 "num_blocks": 65536, 00:08:06.135 "uuid": "e86bf8fa-ca04-4f63-92d3-7d182e88c5b5", 00:08:06.136 "assigned_rate_limits": { 00:08:06.136 "rw_ios_per_sec": 0, 00:08:06.136 "rw_mbytes_per_sec": 0, 00:08:06.136 "r_mbytes_per_sec": 0, 00:08:06.136 
"w_mbytes_per_sec": 0 00:08:06.136 }, 00:08:06.136 "claimed": false, 00:08:06.136 "zoned": false, 00:08:06.136 "supported_io_types": { 00:08:06.136 "read": true, 00:08:06.136 "write": true, 00:08:06.136 "unmap": false, 00:08:06.136 "flush": false, 00:08:06.136 "reset": true, 00:08:06.136 "nvme_admin": false, 00:08:06.136 "nvme_io": false, 00:08:06.136 "nvme_io_md": false, 00:08:06.136 "write_zeroes": true, 00:08:06.136 "zcopy": false, 00:08:06.136 "get_zone_info": false, 00:08:06.136 "zone_management": false, 00:08:06.136 "zone_append": false, 00:08:06.136 "compare": false, 00:08:06.136 "compare_and_write": false, 00:08:06.136 "abort": false, 00:08:06.136 "seek_hole": false, 00:08:06.136 "seek_data": false, 00:08:06.136 "copy": false, 00:08:06.136 "nvme_iov_md": false 00:08:06.136 }, 00:08:06.136 "memory_domains": [ 00:08:06.136 { 00:08:06.136 "dma_device_id": "system", 00:08:06.136 "dma_device_type": 1 00:08:06.136 }, 00:08:06.136 { 00:08:06.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.136 "dma_device_type": 2 00:08:06.136 }, 00:08:06.136 { 00:08:06.136 "dma_device_id": "system", 00:08:06.136 "dma_device_type": 1 00:08:06.136 }, 00:08:06.136 { 00:08:06.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.136 "dma_device_type": 2 00:08:06.136 } 00:08:06.136 ], 00:08:06.136 "driver_specific": { 00:08:06.136 "raid": { 00:08:06.136 "uuid": "e86bf8fa-ca04-4f63-92d3-7d182e88c5b5", 00:08:06.136 "strip_size_kb": 0, 00:08:06.136 "state": "online", 00:08:06.136 "raid_level": "raid1", 00:08:06.136 "superblock": false, 00:08:06.136 "num_base_bdevs": 2, 00:08:06.136 "num_base_bdevs_discovered": 2, 00:08:06.136 "num_base_bdevs_operational": 2, 00:08:06.136 "base_bdevs_list": [ 00:08:06.136 { 00:08:06.136 "name": "BaseBdev1", 00:08:06.136 "uuid": "d9a9ee13-47c7-43a6-8ac9-fc0fde8c624f", 00:08:06.136 "is_configured": true, 00:08:06.136 "data_offset": 0, 00:08:06.136 "data_size": 65536 00:08:06.136 }, 00:08:06.136 { 00:08:06.136 "name": "BaseBdev2", 00:08:06.136 "uuid": 
"42af724c-da50-4339-ac9f-ab8bb9997195", 00:08:06.136 "is_configured": true, 00:08:06.136 "data_offset": 0, 00:08:06.136 "data_size": 65536 00:08:06.136 } 00:08:06.136 ] 00:08:06.136 } 00:08:06.136 } 00:08:06.136 }' 00:08:06.136 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:06.136 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:06.136 BaseBdev2' 00:08:06.136 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:06.396 12:28:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.396 [2024-11-19 12:28:11.508533] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.396 "name": "Existed_Raid", 00:08:06.396 "uuid": "e86bf8fa-ca04-4f63-92d3-7d182e88c5b5", 00:08:06.396 "strip_size_kb": 0, 00:08:06.396 "state": "online", 00:08:06.396 "raid_level": "raid1", 00:08:06.396 "superblock": false, 00:08:06.396 "num_base_bdevs": 2, 00:08:06.396 "num_base_bdevs_discovered": 1, 00:08:06.396 "num_base_bdevs_operational": 1, 00:08:06.396 "base_bdevs_list": [ 00:08:06.396 { 
00:08:06.396 "name": null, 00:08:06.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.396 "is_configured": false, 00:08:06.396 "data_offset": 0, 00:08:06.396 "data_size": 65536 00:08:06.396 }, 00:08:06.396 { 00:08:06.396 "name": "BaseBdev2", 00:08:06.396 "uuid": "42af724c-da50-4339-ac9f-ab8bb9997195", 00:08:06.396 "is_configured": true, 00:08:06.396 "data_offset": 0, 00:08:06.396 "data_size": 65536 00:08:06.396 } 00:08:06.396 ] 00:08:06.396 }' 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.396 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.967 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.967 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.967 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.967 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.967 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.967 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.967 12:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.967 12:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:06.967 [2024-11-19 12:28:12.007506] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.967 [2024-11-19 12:28:12.007610] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.967 [2024-11-19 12:28:12.019827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.967 [2024-11-19 12:28:12.019951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.967 [2024-11-19 12:28:12.019998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74151 00:08:06.967 12:28:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74151 ']' 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74151 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74151 00:08:06.967 killing process with pid 74151 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74151' 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74151 00:08:06.967 [2024-11-19 12:28:12.106278] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.967 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74151 00:08:06.967 [2024-11-19 12:28:12.107319] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.227 ************************************ 00:08:07.227 END TEST raid_state_function_test 00:08:07.227 ************************************ 00:08:07.227 12:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:07.227 00:08:07.227 real 0m3.842s 00:08:07.227 user 0m5.969s 00:08:07.227 sys 0m0.818s 00:08:07.227 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.227 12:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.227 12:28:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:07.227 12:28:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:07.227 12:28:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.227 12:28:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.227 ************************************ 00:08:07.227 START TEST raid_state_function_test_sb 00:08:07.227 ************************************ 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74388 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74388' 00:08:07.228 Process raid pid: 74388 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74388 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74388 ']' 00:08:07.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.228 12:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.488 [2024-11-19 12:28:12.529975] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:07.488 [2024-11-19 12:28:12.530123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.488 [2024-11-19 12:28:12.695369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.488 [2024-11-19 12:28:12.744743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.748 [2024-11-19 12:28:12.789333] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.748 [2024-11-19 12:28:12.789373] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.318 [2024-11-19 12:28:13.367885] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.318 [2024-11-19 12:28:13.367956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.318 [2024-11-19 12:28:13.367969] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.318 [2024-11-19 12:28:13.367979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.318 12:28:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.318 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.318 "name": "Existed_Raid", 00:08:08.318 "uuid": "1cba38a0-469d-4a18-a536-3e05ac6aa175", 00:08:08.318 "strip_size_kb": 0, 00:08:08.318 "state": "configuring", 00:08:08.318 "raid_level": "raid1", 00:08:08.318 "superblock": true, 00:08:08.318 "num_base_bdevs": 2, 00:08:08.318 "num_base_bdevs_discovered": 0, 00:08:08.318 "num_base_bdevs_operational": 2, 00:08:08.318 "base_bdevs_list": [ 00:08:08.318 { 00:08:08.318 "name": "BaseBdev1", 00:08:08.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.318 "is_configured": false, 00:08:08.318 "data_offset": 0, 00:08:08.318 "data_size": 0 00:08:08.318 }, 00:08:08.318 { 00:08:08.318 "name": "BaseBdev2", 00:08:08.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.318 "is_configured": false, 00:08:08.318 "data_offset": 0, 00:08:08.319 "data_size": 0 00:08:08.319 } 00:08:08.319 ] 00:08:08.319 }' 00:08:08.319 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.319 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
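The `verify_raid_bdev_state` helper traced above (`bdev_raid.sh@103`-`@115`) extracts the `Existed_Raid` entry from `rpc_cmd bdev_raid_get_bdevs all` with jq and compares its fields against the expected state. A minimal Python sketch of that comparison, fed the JSON captured in the log (abbreviated to the fields the check reads; `check_raid_state` is a hypothetical name, not an SPDK helper):

```python
import json

# Raid bdev info as reported by `bdev_raid_get_bdevs all` in the log,
# trimmed to the fields verify_raid_bdev_state actually compares.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 2
}
""")

def check_raid_state(info, expected_state, raid_level, strip_size,
                     num_operational):
    # Sketch of the field-by-field comparison the shell helper performs.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

print(check_raid_state(raid_bdev_info, "configuring", "raid1", 0, 2))  # True
```

With no base bdevs created yet, the raid stays in `configuring` with zero discovered members, which is exactly what `verify_raid_bdev_state Existed_Raid configuring raid1 0 2` asserts here.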
00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.579 [2024-11-19 12:28:13.787038] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.579 [2024-11-19 12:28:13.787149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.579 [2024-11-19 12:28:13.799038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.579 [2024-11-19 12:28:13.799082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.579 [2024-11-19 12:28:13.799091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.579 [2024-11-19 12:28:13.799101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.579 [2024-11-19 12:28:13.820368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
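At this point the test has recreated `Existed_Raid` and starts supplying its base bdevs via `bdev_malloc_create`; each "bdev ... is claimed" DEBUG line marks one member being discovered. A toy Python model of the state machine the test exercises — illustration only, under the assumption (consistent with this trace) that the raid leaves `configuring` once every operational slot is claimed; SPDK's real transitions live in `bdev_raid.c`:

```python
class RaidBdevModel:
    """Toy model of the configuring -> online transition this test drives.

    Not SPDK code: it only captures that the raid stays in 'configuring'
    until all declared base bdevs have been discovered and claimed.
    """

    def __init__(self, base_bdevs):
        # name -> discovered flag, one slot per declared base bdev
        self.base_bdevs = {name: False for name in base_bdevs}
        self.state = "configuring"

    def discover(self, name):
        # Corresponds to a "bdev <name> is claimed" DEBUG line in the log.
        self.base_bdevs[name] = True
        if all(self.base_bdevs.values()):
            self.state = "online"

    @property
    def num_discovered(self):
        return sum(self.base_bdevs.values())

raid = RaidBdevModel(["BaseBdev1", "BaseBdev2"])
raid.discover("BaseBdev1")
print(raid.state, raid.num_discovered)  # configuring 1
raid.discover("BaseBdev2")
print(raid.state, raid.num_discovered)  # online 2
```

This mirrors the trace: after `BaseBdev1` is claimed the raid reports `num_base_bdevs_discovered: 1` and state `configuring`; once `BaseBdev2` is claimed it goes `online` with both members discovered.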
00:08:08.579 BaseBdev1 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.579 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.953 [ 00:08:08.953 { 00:08:08.953 "name": "BaseBdev1", 00:08:08.953 "aliases": [ 00:08:08.953 "4a41b076-d657-47ff-8a14-5236c342d84c" 00:08:08.953 ], 00:08:08.953 "product_name": "Malloc disk", 00:08:08.953 "block_size": 512, 00:08:08.953 "num_blocks": 65536, 00:08:08.953 "uuid": "4a41b076-d657-47ff-8a14-5236c342d84c", 00:08:08.953 
"assigned_rate_limits": { 00:08:08.953 "rw_ios_per_sec": 0, 00:08:08.953 "rw_mbytes_per_sec": 0, 00:08:08.953 "r_mbytes_per_sec": 0, 00:08:08.953 "w_mbytes_per_sec": 0 00:08:08.953 }, 00:08:08.953 "claimed": true, 00:08:08.953 "claim_type": "exclusive_write", 00:08:08.953 "zoned": false, 00:08:08.953 "supported_io_types": { 00:08:08.953 "read": true, 00:08:08.953 "write": true, 00:08:08.953 "unmap": true, 00:08:08.953 "flush": true, 00:08:08.953 "reset": true, 00:08:08.953 "nvme_admin": false, 00:08:08.953 "nvme_io": false, 00:08:08.953 "nvme_io_md": false, 00:08:08.953 "write_zeroes": true, 00:08:08.953 "zcopy": true, 00:08:08.953 "get_zone_info": false, 00:08:08.953 "zone_management": false, 00:08:08.953 "zone_append": false, 00:08:08.953 "compare": false, 00:08:08.953 "compare_and_write": false, 00:08:08.953 "abort": true, 00:08:08.953 "seek_hole": false, 00:08:08.953 "seek_data": false, 00:08:08.953 "copy": true, 00:08:08.953 "nvme_iov_md": false 00:08:08.953 }, 00:08:08.953 "memory_domains": [ 00:08:08.953 { 00:08:08.953 "dma_device_id": "system", 00:08:08.953 "dma_device_type": 1 00:08:08.953 }, 00:08:08.953 { 00:08:08.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.953 "dma_device_type": 2 00:08:08.953 } 00:08:08.953 ], 00:08:08.953 "driver_specific": {} 00:08:08.953 } 00:08:08.953 ] 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.953 "name": "Existed_Raid", 00:08:08.953 "uuid": "8889dd8e-f28a-44e3-b3e1-a9320a6894e6", 00:08:08.953 "strip_size_kb": 0, 00:08:08.953 "state": "configuring", 00:08:08.953 "raid_level": "raid1", 00:08:08.953 "superblock": true, 00:08:08.953 "num_base_bdevs": 2, 00:08:08.953 "num_base_bdevs_discovered": 1, 00:08:08.953 "num_base_bdevs_operational": 2, 00:08:08.953 "base_bdevs_list": [ 00:08:08.953 { 00:08:08.953 "name": "BaseBdev1", 00:08:08.953 "uuid": "4a41b076-d657-47ff-8a14-5236c342d84c", 00:08:08.953 "is_configured": true, 00:08:08.953 "data_offset": 2048, 
00:08:08.953 "data_size": 63488 00:08:08.953 }, 00:08:08.953 { 00:08:08.953 "name": "BaseBdev2", 00:08:08.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.953 "is_configured": false, 00:08:08.953 "data_offset": 0, 00:08:08.953 "data_size": 0 00:08:08.953 } 00:08:08.953 ] 00:08:08.953 }' 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.953 12:28:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.214 [2024-11-19 12:28:14.275851] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.214 [2024-11-19 12:28:14.275993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.214 [2024-11-19 12:28:14.287845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.214 [2024-11-19 12:28:14.289671] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.214 [2024-11-19 12:28:14.289719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.214 "name": "Existed_Raid", 00:08:09.214 "uuid": "b15dc067-85bb-450f-8ef2-b9760a3db3fa", 00:08:09.214 "strip_size_kb": 0, 00:08:09.214 "state": "configuring", 00:08:09.214 "raid_level": "raid1", 00:08:09.214 "superblock": true, 00:08:09.214 "num_base_bdevs": 2, 00:08:09.214 "num_base_bdevs_discovered": 1, 00:08:09.214 "num_base_bdevs_operational": 2, 00:08:09.214 "base_bdevs_list": [ 00:08:09.214 { 00:08:09.214 "name": "BaseBdev1", 00:08:09.214 "uuid": "4a41b076-d657-47ff-8a14-5236c342d84c", 00:08:09.214 "is_configured": true, 00:08:09.214 "data_offset": 2048, 00:08:09.214 "data_size": 63488 00:08:09.214 }, 00:08:09.214 { 00:08:09.214 "name": "BaseBdev2", 00:08:09.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.214 "is_configured": false, 00:08:09.214 "data_offset": 0, 00:08:09.214 "data_size": 0 00:08:09.214 } 00:08:09.214 ] 00:08:09.214 }' 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.214 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.785 [2024-11-19 12:28:14.778479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.785 [2024-11-19 12:28:14.778846] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:09.785 [2024-11-19 12:28:14.778914] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.785 [2024-11-19 12:28:14.779283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:09.785 BaseBdev2 00:08:09.785 [2024-11-19 12:28:14.779498] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:09.785 [2024-11-19 12:28:14.779577] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:09.785 [2024-11-19 12:28:14.779824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.785 [ 00:08:09.785 { 00:08:09.785 "name": "BaseBdev2", 00:08:09.785 "aliases": [ 00:08:09.785 "5167c33b-7286-4d25-973c-99856344a0e7" 00:08:09.785 ], 00:08:09.785 "product_name": "Malloc disk", 00:08:09.785 "block_size": 512, 00:08:09.785 "num_blocks": 65536, 00:08:09.785 "uuid": "5167c33b-7286-4d25-973c-99856344a0e7", 00:08:09.785 "assigned_rate_limits": { 00:08:09.785 "rw_ios_per_sec": 0, 00:08:09.785 "rw_mbytes_per_sec": 0, 00:08:09.785 "r_mbytes_per_sec": 0, 00:08:09.785 "w_mbytes_per_sec": 0 00:08:09.785 }, 00:08:09.785 "claimed": true, 00:08:09.785 "claim_type": "exclusive_write", 00:08:09.785 "zoned": false, 00:08:09.785 "supported_io_types": { 00:08:09.785 "read": true, 00:08:09.785 "write": true, 00:08:09.785 "unmap": true, 00:08:09.785 "flush": true, 00:08:09.785 "reset": true, 00:08:09.785 "nvme_admin": false, 00:08:09.785 "nvme_io": false, 00:08:09.785 "nvme_io_md": false, 00:08:09.785 "write_zeroes": true, 00:08:09.785 "zcopy": true, 00:08:09.785 "get_zone_info": false, 00:08:09.785 "zone_management": false, 00:08:09.785 "zone_append": false, 00:08:09.785 "compare": false, 00:08:09.785 "compare_and_write": false, 00:08:09.785 "abort": true, 00:08:09.785 "seek_hole": false, 00:08:09.785 "seek_data": false, 00:08:09.785 "copy": true, 00:08:09.785 "nvme_iov_md": false 00:08:09.785 }, 00:08:09.785 "memory_domains": [ 00:08:09.785 { 00:08:09.785 "dma_device_id": "system", 00:08:09.785 "dma_device_type": 1 00:08:09.785 }, 00:08:09.785 { 00:08:09.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.785 "dma_device_type": 2 00:08:09.785 } 00:08:09.785 ], 00:08:09.785 "driver_specific": {} 00:08:09.785 } 00:08:09.785 ] 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.785 "name": "Existed_Raid", 00:08:09.785 "uuid": "b15dc067-85bb-450f-8ef2-b9760a3db3fa", 00:08:09.785 "strip_size_kb": 0, 00:08:09.785 "state": "online", 00:08:09.785 "raid_level": "raid1", 00:08:09.785 "superblock": true, 00:08:09.785 "num_base_bdevs": 2, 00:08:09.785 "num_base_bdevs_discovered": 2, 00:08:09.785 "num_base_bdevs_operational": 2, 00:08:09.785 "base_bdevs_list": [ 00:08:09.785 { 00:08:09.785 "name": "BaseBdev1", 00:08:09.785 "uuid": "4a41b076-d657-47ff-8a14-5236c342d84c", 00:08:09.785 "is_configured": true, 00:08:09.785 "data_offset": 2048, 00:08:09.785 "data_size": 63488 00:08:09.785 }, 00:08:09.785 { 00:08:09.785 "name": "BaseBdev2", 00:08:09.785 "uuid": "5167c33b-7286-4d25-973c-99856344a0e7", 00:08:09.785 "is_configured": true, 00:08:09.785 "data_offset": 2048, 00:08:09.785 "data_size": 63488 00:08:09.785 } 00:08:09.785 ] 00:08:09.785 }' 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.785 12:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.045 [2024-11-19 12:28:15.229983] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.045 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:10.045 "name": "Existed_Raid", 00:08:10.045 "aliases": [ 00:08:10.045 "b15dc067-85bb-450f-8ef2-b9760a3db3fa" 00:08:10.045 ], 00:08:10.045 "product_name": "Raid Volume", 00:08:10.045 "block_size": 512, 00:08:10.045 "num_blocks": 63488, 00:08:10.045 "uuid": "b15dc067-85bb-450f-8ef2-b9760a3db3fa", 00:08:10.045 "assigned_rate_limits": { 00:08:10.045 "rw_ios_per_sec": 0, 00:08:10.045 "rw_mbytes_per_sec": 0, 00:08:10.045 "r_mbytes_per_sec": 0, 00:08:10.045 "w_mbytes_per_sec": 0 00:08:10.045 }, 00:08:10.045 "claimed": false, 00:08:10.045 "zoned": false, 00:08:10.045 "supported_io_types": { 00:08:10.045 "read": true, 00:08:10.045 "write": true, 00:08:10.045 "unmap": false, 00:08:10.045 "flush": false, 00:08:10.045 "reset": true, 00:08:10.045 "nvme_admin": false, 00:08:10.045 "nvme_io": false, 00:08:10.045 "nvme_io_md": false, 00:08:10.045 "write_zeroes": true, 00:08:10.045 "zcopy": false, 00:08:10.045 "get_zone_info": false, 00:08:10.045 "zone_management": false, 00:08:10.045 "zone_append": false, 00:08:10.045 "compare": false, 00:08:10.045 "compare_and_write": false, 00:08:10.045 "abort": false, 00:08:10.045 "seek_hole": false, 
00:08:10.045 "seek_data": false, 00:08:10.045 "copy": false, 00:08:10.046 "nvme_iov_md": false 00:08:10.046 }, 00:08:10.046 "memory_domains": [ 00:08:10.046 { 00:08:10.046 "dma_device_id": "system", 00:08:10.046 "dma_device_type": 1 00:08:10.046 }, 00:08:10.046 { 00:08:10.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.046 "dma_device_type": 2 00:08:10.046 }, 00:08:10.046 { 00:08:10.046 "dma_device_id": "system", 00:08:10.046 "dma_device_type": 1 00:08:10.046 }, 00:08:10.046 { 00:08:10.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.046 "dma_device_type": 2 00:08:10.046 } 00:08:10.046 ], 00:08:10.046 "driver_specific": { 00:08:10.046 "raid": { 00:08:10.046 "uuid": "b15dc067-85bb-450f-8ef2-b9760a3db3fa", 00:08:10.046 "strip_size_kb": 0, 00:08:10.046 "state": "online", 00:08:10.046 "raid_level": "raid1", 00:08:10.046 "superblock": true, 00:08:10.046 "num_base_bdevs": 2, 00:08:10.046 "num_base_bdevs_discovered": 2, 00:08:10.046 "num_base_bdevs_operational": 2, 00:08:10.046 "base_bdevs_list": [ 00:08:10.046 { 00:08:10.046 "name": "BaseBdev1", 00:08:10.046 "uuid": "4a41b076-d657-47ff-8a14-5236c342d84c", 00:08:10.046 "is_configured": true, 00:08:10.046 "data_offset": 2048, 00:08:10.046 "data_size": 63488 00:08:10.046 }, 00:08:10.046 { 00:08:10.046 "name": "BaseBdev2", 00:08:10.046 "uuid": "5167c33b-7286-4d25-973c-99856344a0e7", 00:08:10.046 "is_configured": true, 00:08:10.046 "data_offset": 2048, 00:08:10.046 "data_size": 63488 00:08:10.046 } 00:08:10.046 ] 00:08:10.046 } 00:08:10.046 } 00:08:10.046 }' 00:08:10.046 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:10.306 BaseBdev2' 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.306 12:28:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.306 [2024-11-19 12:28:15.441381] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:10.306 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.307 "name": "Existed_Raid", 00:08:10.307 "uuid": "b15dc067-85bb-450f-8ef2-b9760a3db3fa", 00:08:10.307 "strip_size_kb": 0, 00:08:10.307 "state": "online", 00:08:10.307 "raid_level": "raid1", 00:08:10.307 "superblock": true, 00:08:10.307 "num_base_bdevs": 2, 00:08:10.307 "num_base_bdevs_discovered": 1, 00:08:10.307 "num_base_bdevs_operational": 1, 00:08:10.307 "base_bdevs_list": [ 00:08:10.307 { 00:08:10.307 "name": null, 00:08:10.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.307 "is_configured": false, 00:08:10.307 "data_offset": 0, 00:08:10.307 "data_size": 63488 00:08:10.307 }, 00:08:10.307 { 00:08:10.307 "name": "BaseBdev2", 00:08:10.307 "uuid": "5167c33b-7286-4d25-973c-99856344a0e7", 00:08:10.307 "is_configured": true, 00:08:10.307 "data_offset": 2048, 00:08:10.307 "data_size": 63488 00:08:10.307 } 00:08:10.307 ] 00:08:10.307 }' 00:08:10.307 12:28:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.307 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.877 [2024-11-19 12:28:15.916192] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.877 [2024-11-19 12:28:15.916298] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.877 [2024-11-19 12:28:15.928030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.877 [2024-11-19 12:28:15.928086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.877 [2024-11-19 12:28:15.928097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74388 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74388 ']' 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74388 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:08:10.877 12:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74388 00:08:10.877 12:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.877 12:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.877 killing process with pid 74388 00:08:10.877 12:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74388' 00:08:10.877 12:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74388 00:08:10.877 [2024-11-19 12:28:16.026526] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.877 12:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74388 00:08:10.877 [2024-11-19 12:28:16.027513] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.137 12:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:11.137 00:08:11.137 real 0m3.845s 00:08:11.137 user 0m5.983s 00:08:11.137 sys 0m0.827s 00:08:11.137 ************************************ 00:08:11.137 END TEST raid_state_function_test_sb 00:08:11.137 ************************************ 00:08:11.137 12:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.137 12:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.137 12:28:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:11.137 12:28:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:11.137 12:28:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.137 12:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.137 ************************************ 00:08:11.137 START TEST 
raid_superblock_test 00:08:11.137 ************************************ 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74629 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74629 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74629 ']' 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.137 12:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.397 [2024-11-19 12:28:16.437085] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:11.397 [2024-11-19 12:28:16.437309] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74629 ] 00:08:11.397 [2024-11-19 12:28:16.598537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.397 [2024-11-19 12:28:16.646222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.657 [2024-11-19 12:28:16.690285] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.657 [2024-11-19 12:28:16.690413] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:12.228 
12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.228 malloc1 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.228 [2024-11-19 12:28:17.301584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.228 [2024-11-19 12:28:17.301770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.228 [2024-11-19 12:28:17.301810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:12.228 [2024-11-19 12:28:17.301862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.228 [2024-11-19 12:28:17.304004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.228 [2024-11-19 12:28:17.304098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.228 pt1 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:12.228 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.229 malloc2 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.229 [2024-11-19 12:28:17.341517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:12.229 [2024-11-19 12:28:17.341650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.229 [2024-11-19 12:28:17.341669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:12.229 [2024-11-19 12:28:17.341681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.229 [2024-11-19 12:28:17.343797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.229 [2024-11-19 12:28:17.343834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:12.229 
pt2 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.229 [2024-11-19 12:28:17.353517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:12.229 [2024-11-19 12:28:17.355330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:12.229 [2024-11-19 12:28:17.355470] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:12.229 [2024-11-19 12:28:17.355486] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:12.229 [2024-11-19 12:28:17.355765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:12.229 [2024-11-19 12:28:17.355908] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:12.229 [2024-11-19 12:28:17.355918] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:12.229 [2024-11-19 12:28:17.356040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.229 "name": "raid_bdev1", 00:08:12.229 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:12.229 "strip_size_kb": 0, 00:08:12.229 "state": "online", 00:08:12.229 "raid_level": "raid1", 00:08:12.229 "superblock": true, 00:08:12.229 "num_base_bdevs": 2, 00:08:12.229 "num_base_bdevs_discovered": 2, 00:08:12.229 "num_base_bdevs_operational": 2, 00:08:12.229 "base_bdevs_list": [ 00:08:12.229 { 00:08:12.229 "name": "pt1", 00:08:12.229 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:12.229 "is_configured": true, 00:08:12.229 "data_offset": 2048, 00:08:12.229 "data_size": 63488 00:08:12.229 }, 00:08:12.229 { 00:08:12.229 "name": "pt2", 00:08:12.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.229 "is_configured": true, 00:08:12.229 "data_offset": 2048, 00:08:12.229 "data_size": 63488 00:08:12.229 } 00:08:12.229 ] 00:08:12.229 }' 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.229 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.799 [2024-11-19 12:28:17.829061] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:12.799 "name": "raid_bdev1", 00:08:12.799 "aliases": [ 00:08:12.799 "eafe6120-a58c-4eef-b430-8bbd8fc42cdd" 00:08:12.799 ], 00:08:12.799 "product_name": "Raid Volume", 00:08:12.799 "block_size": 512, 00:08:12.799 "num_blocks": 63488, 00:08:12.799 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:12.799 "assigned_rate_limits": { 00:08:12.799 "rw_ios_per_sec": 0, 00:08:12.799 "rw_mbytes_per_sec": 0, 00:08:12.799 "r_mbytes_per_sec": 0, 00:08:12.799 "w_mbytes_per_sec": 0 00:08:12.799 }, 00:08:12.799 "claimed": false, 00:08:12.799 "zoned": false, 00:08:12.799 "supported_io_types": { 00:08:12.799 "read": true, 00:08:12.799 "write": true, 00:08:12.799 "unmap": false, 00:08:12.799 "flush": false, 00:08:12.799 "reset": true, 00:08:12.799 "nvme_admin": false, 00:08:12.799 "nvme_io": false, 00:08:12.799 "nvme_io_md": false, 00:08:12.799 "write_zeroes": true, 00:08:12.799 "zcopy": false, 00:08:12.799 "get_zone_info": false, 00:08:12.799 "zone_management": false, 00:08:12.799 "zone_append": false, 00:08:12.799 "compare": false, 00:08:12.799 "compare_and_write": false, 00:08:12.799 "abort": false, 00:08:12.799 "seek_hole": false, 00:08:12.799 "seek_data": false, 00:08:12.799 "copy": false, 00:08:12.799 "nvme_iov_md": false 00:08:12.799 }, 00:08:12.799 "memory_domains": [ 00:08:12.799 { 00:08:12.799 "dma_device_id": "system", 00:08:12.799 "dma_device_type": 1 00:08:12.799 }, 00:08:12.799 { 00:08:12.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.799 "dma_device_type": 2 00:08:12.799 }, 00:08:12.799 { 00:08:12.799 "dma_device_id": "system", 00:08:12.799 "dma_device_type": 1 00:08:12.799 }, 00:08:12.799 { 00:08:12.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.799 "dma_device_type": 2 00:08:12.799 } 00:08:12.799 ], 00:08:12.799 "driver_specific": { 00:08:12.799 "raid": { 00:08:12.799 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:12.799 "strip_size_kb": 0, 00:08:12.799 "state": "online", 00:08:12.799 "raid_level": "raid1", 
00:08:12.799 "superblock": true, 00:08:12.799 "num_base_bdevs": 2, 00:08:12.799 "num_base_bdevs_discovered": 2, 00:08:12.799 "num_base_bdevs_operational": 2, 00:08:12.799 "base_bdevs_list": [ 00:08:12.799 { 00:08:12.799 "name": "pt1", 00:08:12.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.799 "is_configured": true, 00:08:12.799 "data_offset": 2048, 00:08:12.799 "data_size": 63488 00:08:12.799 }, 00:08:12.799 { 00:08:12.799 "name": "pt2", 00:08:12.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.799 "is_configured": true, 00:08:12.799 "data_offset": 2048, 00:08:12.799 "data_size": 63488 00:08:12.799 } 00:08:12.799 ] 00:08:12.799 } 00:08:12.799 } 00:08:12.799 }' 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:12.799 pt2' 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.799 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:12.800 12:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.800 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.800 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.800 12:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.800 12:28:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.800 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.800 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.800 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.800 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:12.800 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.800 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.800 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.060 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.060 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.060 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.060 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:13.060 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.060 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.060 [2024-11-19 12:28:18.072564] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.060 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.060 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eafe6120-a58c-4eef-b430-8bbd8fc42cdd 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eafe6120-a58c-4eef-b430-8bbd8fc42cdd ']' 00:08:13.061 12:28:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 [2024-11-19 12:28:18.120227] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.061 [2024-11-19 12:28:18.120319] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.061 [2024-11-19 12:28:18.120416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.061 [2024-11-19 12:28:18.120508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.061 [2024-11-19 12:28:18.120519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.061 12:28:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 [2024-11-19 12:28:18.268043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:13.061 [2024-11-19 12:28:18.270033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:13.061 [2024-11-19 12:28:18.270154] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:13.061 [2024-11-19 12:28:18.270208] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:13.061 [2024-11-19 12:28:18.270226] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.061 [2024-11-19 12:28:18.270235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:13.061 request: 00:08:13.061 { 00:08:13.061 "name": "raid_bdev1", 00:08:13.061 "raid_level": "raid1", 00:08:13.061 "base_bdevs": [ 00:08:13.061 "malloc1", 00:08:13.061 "malloc2" 00:08:13.061 ], 00:08:13.061 "superblock": false, 00:08:13.061 "method": "bdev_raid_create", 00:08:13.061 "req_id": 1 00:08:13.061 } 00:08:13.061 Got 
JSON-RPC error response 00:08:13.061 response: 00:08:13.061 { 00:08:13.061 "code": -17, 00:08:13.061 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:13.061 } 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.321 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:13.321 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:13.321 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.321 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.321 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.321 [2024-11-19 12:28:18.335928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.322 [2024-11-19 12:28:18.336081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:13.322 [2024-11-19 12:28:18.336120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:13.322 [2024-11-19 12:28:18.336147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.322 [2024-11-19 12:28:18.338300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.322 [2024-11-19 12:28:18.338388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.322 [2024-11-19 12:28:18.338520] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:13.322 [2024-11-19 12:28:18.338588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.322 pt1 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.322 
12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.322 "name": "raid_bdev1", 00:08:13.322 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:13.322 "strip_size_kb": 0, 00:08:13.322 "state": "configuring", 00:08:13.322 "raid_level": "raid1", 00:08:13.322 "superblock": true, 00:08:13.322 "num_base_bdevs": 2, 00:08:13.322 "num_base_bdevs_discovered": 1, 00:08:13.322 "num_base_bdevs_operational": 2, 00:08:13.322 "base_bdevs_list": [ 00:08:13.322 { 00:08:13.322 "name": "pt1", 00:08:13.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.322 "is_configured": true, 00:08:13.322 "data_offset": 2048, 00:08:13.322 "data_size": 63488 00:08:13.322 }, 00:08:13.322 { 00:08:13.322 "name": null, 00:08:13.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.322 "is_configured": false, 00:08:13.322 "data_offset": 2048, 00:08:13.322 "data_size": 63488 00:08:13.322 } 00:08:13.322 ] 00:08:13.322 }' 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.322 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.581 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:13.581 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:13.581 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:13.581 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.581 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.581 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.582 [2024-11-19 12:28:18.835098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.582 [2024-11-19 12:28:18.835176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.582 [2024-11-19 12:28:18.835200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:13.582 [2024-11-19 12:28:18.835210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.582 [2024-11-19 12:28:18.835652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.582 [2024-11-19 12:28:18.835683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.582 [2024-11-19 12:28:18.835778] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:13.582 [2024-11-19 12:28:18.835804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.582 [2024-11-19 12:28:18.835928] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:13.582 [2024-11-19 12:28:18.835939] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:13.582 [2024-11-19 12:28:18.836174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:13.582 [2024-11-19 12:28:18.836290] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:13.582 [2024-11-19 12:28:18.836306] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006980 00:08:13.582 [2024-11-19 12:28:18.836411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.841 pt2 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.841 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.841 "name": "raid_bdev1", 00:08:13.841 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:13.841 "strip_size_kb": 0, 00:08:13.841 "state": "online", 00:08:13.841 "raid_level": "raid1", 00:08:13.841 "superblock": true, 00:08:13.841 "num_base_bdevs": 2, 00:08:13.841 "num_base_bdevs_discovered": 2, 00:08:13.841 "num_base_bdevs_operational": 2, 00:08:13.841 "base_bdevs_list": [ 00:08:13.841 { 00:08:13.841 "name": "pt1", 00:08:13.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.841 "is_configured": true, 00:08:13.841 "data_offset": 2048, 00:08:13.842 "data_size": 63488 00:08:13.842 }, 00:08:13.842 { 00:08:13.842 "name": "pt2", 00:08:13.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.842 "is_configured": true, 00:08:13.842 "data_offset": 2048, 00:08:13.842 "data_size": 63488 00:08:13.842 } 00:08:13.842 ] 00:08:13.842 }' 00:08:13.842 12:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.842 12:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.100 [2024-11-19 12:28:19.315090] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.100 "name": "raid_bdev1", 00:08:14.100 "aliases": [ 00:08:14.100 "eafe6120-a58c-4eef-b430-8bbd8fc42cdd" 00:08:14.100 ], 00:08:14.100 "product_name": "Raid Volume", 00:08:14.100 "block_size": 512, 00:08:14.100 "num_blocks": 63488, 00:08:14.100 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:14.100 "assigned_rate_limits": { 00:08:14.100 "rw_ios_per_sec": 0, 00:08:14.100 "rw_mbytes_per_sec": 0, 00:08:14.100 "r_mbytes_per_sec": 0, 00:08:14.100 "w_mbytes_per_sec": 0 00:08:14.100 }, 00:08:14.100 "claimed": false, 00:08:14.100 "zoned": false, 00:08:14.100 "supported_io_types": { 00:08:14.100 "read": true, 00:08:14.100 "write": true, 00:08:14.100 "unmap": false, 00:08:14.100 "flush": false, 00:08:14.100 "reset": true, 00:08:14.100 "nvme_admin": false, 00:08:14.100 "nvme_io": false, 00:08:14.100 "nvme_io_md": false, 00:08:14.100 "write_zeroes": true, 00:08:14.100 "zcopy": false, 00:08:14.100 "get_zone_info": false, 00:08:14.100 "zone_management": false, 00:08:14.100 "zone_append": false, 00:08:14.100 "compare": false, 00:08:14.100 "compare_and_write": false, 00:08:14.100 "abort": false, 00:08:14.100 "seek_hole": false, 00:08:14.100 "seek_data": false, 00:08:14.100 "copy": false, 00:08:14.100 "nvme_iov_md": false 00:08:14.100 }, 00:08:14.100 "memory_domains": [ 00:08:14.100 { 00:08:14.100 "dma_device_id": 
"system", 00:08:14.100 "dma_device_type": 1 00:08:14.100 }, 00:08:14.100 { 00:08:14.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.100 "dma_device_type": 2 00:08:14.100 }, 00:08:14.100 { 00:08:14.100 "dma_device_id": "system", 00:08:14.100 "dma_device_type": 1 00:08:14.100 }, 00:08:14.100 { 00:08:14.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.100 "dma_device_type": 2 00:08:14.100 } 00:08:14.100 ], 00:08:14.100 "driver_specific": { 00:08:14.100 "raid": { 00:08:14.100 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:14.100 "strip_size_kb": 0, 00:08:14.100 "state": "online", 00:08:14.100 "raid_level": "raid1", 00:08:14.100 "superblock": true, 00:08:14.100 "num_base_bdevs": 2, 00:08:14.100 "num_base_bdevs_discovered": 2, 00:08:14.100 "num_base_bdevs_operational": 2, 00:08:14.100 "base_bdevs_list": [ 00:08:14.100 { 00:08:14.100 "name": "pt1", 00:08:14.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.100 "is_configured": true, 00:08:14.100 "data_offset": 2048, 00:08:14.100 "data_size": 63488 00:08:14.100 }, 00:08:14.100 { 00:08:14.100 "name": "pt2", 00:08:14.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.100 "is_configured": true, 00:08:14.100 "data_offset": 2048, 00:08:14.100 "data_size": 63488 00:08:14.100 } 00:08:14.100 ] 00:08:14.100 } 00:08:14.100 } 00:08:14.100 }' 00:08:14.100 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:14.360 pt2' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.360 [2024-11-19 12:28:19.506673] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eafe6120-a58c-4eef-b430-8bbd8fc42cdd '!=' eafe6120-a58c-4eef-b430-8bbd8fc42cdd ']' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.360 [2024-11-19 12:28:19.550381] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.360 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.360 "name": "raid_bdev1", 00:08:14.360 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:14.360 "strip_size_kb": 0, 00:08:14.360 "state": "online", 00:08:14.360 "raid_level": "raid1", 00:08:14.360 "superblock": true, 00:08:14.360 "num_base_bdevs": 2, 00:08:14.360 "num_base_bdevs_discovered": 1, 00:08:14.360 "num_base_bdevs_operational": 1, 00:08:14.360 "base_bdevs_list": [ 00:08:14.360 { 00:08:14.360 "name": null, 00:08:14.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.360 "is_configured": false, 00:08:14.361 "data_offset": 0, 00:08:14.361 "data_size": 63488 00:08:14.361 }, 00:08:14.361 { 00:08:14.361 "name": "pt2", 00:08:14.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.361 "is_configured": true, 00:08:14.361 "data_offset": 2048, 00:08:14.361 "data_size": 63488 00:08:14.361 } 00:08:14.361 ] 00:08:14.361 }' 
00:08:14.361 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.361 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.931 12:28:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.931 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.931 12:28:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.931 [2024-11-19 12:28:20.001652] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.931 [2024-11-19 12:28:20.001690] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.931 [2024-11-19 12:28:20.001787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.931 [2024-11-19 12:28:20.001840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.931 [2024-11-19 12:28:20.001850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.931 [2024-11-19 12:28:20.073503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.931 [2024-11-19 12:28:20.073562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.931 [2024-11-19 12:28:20.073581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:14.931 [2024-11-19 12:28:20.073590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.931 
[2024-11-19 12:28:20.075954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.931 [2024-11-19 12:28:20.076040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.931 [2024-11-19 12:28:20.076141] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.931 [2024-11-19 12:28:20.076208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.931 [2024-11-19 12:28:20.076337] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:14.931 [2024-11-19 12:28:20.076376] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.931 [2024-11-19 12:28:20.076617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:14.931 [2024-11-19 12:28:20.076795] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:14.931 [2024-11-19 12:28:20.076845] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:14.931 [2024-11-19 12:28:20.077000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.931 pt2 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.931 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.931 "name": "raid_bdev1", 00:08:14.931 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:14.931 "strip_size_kb": 0, 00:08:14.931 "state": "online", 00:08:14.931 "raid_level": "raid1", 00:08:14.931 "superblock": true, 00:08:14.931 "num_base_bdevs": 2, 00:08:14.931 "num_base_bdevs_discovered": 1, 00:08:14.931 "num_base_bdevs_operational": 1, 00:08:14.931 "base_bdevs_list": [ 00:08:14.931 { 00:08:14.931 "name": null, 00:08:14.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.931 "is_configured": false, 00:08:14.931 "data_offset": 2048, 00:08:14.931 "data_size": 63488 00:08:14.931 }, 00:08:14.931 { 00:08:14.931 "name": "pt2", 00:08:14.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.931 "is_configured": true, 00:08:14.931 "data_offset": 2048, 00:08:14.931 "data_size": 63488 00:08:14.931 } 00:08:14.932 ] 00:08:14.932 }' 
00:08:14.932 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.932 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.502 [2024-11-19 12:28:20.516846] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.502 [2024-11-19 12:28:20.516975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.502 [2024-11-19 12:28:20.517089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.502 [2024-11-19 12:28:20.517156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.502 [2024-11-19 12:28:20.517215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.502 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.502 [2024-11-19 12:28:20.580685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:15.502 [2024-11-19 12:28:20.580900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.502 [2024-11-19 12:28:20.580945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:15.502 [2024-11-19 12:28:20.580985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.502 [2024-11-19 12:28:20.583248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.502 [2024-11-19 12:28:20.583329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:15.502 [2024-11-19 12:28:20.583437] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:15.502 [2024-11-19 12:28:20.583484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:15.502 [2024-11-19 12:28:20.583598] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:15.502 [2024-11-19 12:28:20.583612] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.502 [2024-11-19 12:28:20.583633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:08:15.502 [2024-11-19 12:28:20.583674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:15.502 [2024-11-19 12:28:20.583744] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:15.502 [2024-11-19 12:28:20.583755] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:15.503 [2024-11-19 12:28:20.583991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:15.503 [2024-11-19 12:28:20.584105] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:15.503 [2024-11-19 12:28:20.584121] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:15.503 [2024-11-19 12:28:20.584233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.503 pt1 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.503 "name": "raid_bdev1", 00:08:15.503 "uuid": "eafe6120-a58c-4eef-b430-8bbd8fc42cdd", 00:08:15.503 "strip_size_kb": 0, 00:08:15.503 "state": "online", 00:08:15.503 "raid_level": "raid1", 00:08:15.503 "superblock": true, 00:08:15.503 "num_base_bdevs": 2, 00:08:15.503 "num_base_bdevs_discovered": 1, 00:08:15.503 "num_base_bdevs_operational": 1, 00:08:15.503 "base_bdevs_list": [ 00:08:15.503 { 00:08:15.503 "name": null, 00:08:15.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.503 "is_configured": false, 00:08:15.503 "data_offset": 2048, 00:08:15.503 "data_size": 63488 00:08:15.503 }, 00:08:15.503 { 00:08:15.503 "name": "pt2", 00:08:15.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.503 "is_configured": true, 00:08:15.503 "data_offset": 2048, 00:08:15.503 "data_size": 63488 00:08:15.503 } 00:08:15.503 ] 00:08:15.503 }' 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.503 12:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.074 [2024-11-19 12:28:21.064090] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' eafe6120-a58c-4eef-b430-8bbd8fc42cdd '!=' eafe6120-a58c-4eef-b430-8bbd8fc42cdd ']' 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74629 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74629 ']' 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74629 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74629 00:08:16.074 12:28:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.074 killing process with pid 74629 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74629' 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74629 00:08:16.074 [2024-11-19 12:28:21.140831] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.074 [2024-11-19 12:28:21.140939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.074 [2024-11-19 12:28:21.140987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.074 [2024-11-19 12:28:21.140996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:16.074 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74629 00:08:16.074 [2024-11-19 12:28:21.164038] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.334 ************************************ 00:08:16.334 END TEST raid_superblock_test 00:08:16.335 ************************************ 00:08:16.335 12:28:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:16.335 00:08:16.335 real 0m5.063s 00:08:16.335 user 0m8.262s 00:08:16.335 sys 0m1.059s 00:08:16.335 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.335 12:28:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.335 12:28:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:16.335 12:28:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:16.335 12:28:21 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.335 12:28:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.335 ************************************ 00:08:16.335 START TEST raid_read_error_test 00:08:16.335 ************************************ 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.335 12:28:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TFXX7Dziky 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74948 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74948 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74948 ']' 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.335 12:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.335 [2024-11-19 12:28:21.584156] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:16.335 [2024-11-19 12:28:21.584358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74948 ] 00:08:16.596 [2024-11-19 12:28:21.742806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.596 [2024-11-19 12:28:21.787702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.596 [2024-11-19 12:28:21.830667] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.596 [2024-11-19 12:28:21.830712] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.165 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.165 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.165 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.165 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.165 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.165 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.425 BaseBdev1_malloc 00:08:17.425 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.425 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:17.425 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.425 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.425 true 00:08:17.425 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.425 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.425 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.425 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.425 [2024-11-19 12:28:22.452682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.425 [2024-11-19 12:28:22.452807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.425 [2024-11-19 12:28:22.452851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:17.425 [2024-11-19 12:28:22.452862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.426 [2024-11-19 12:28:22.454884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.426 [2024-11-19 12:28:22.454919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.426 BaseBdev1 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:17.426 BaseBdev2_malloc 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.426 true 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.426 [2024-11-19 12:28:22.504379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.426 [2024-11-19 12:28:22.504486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.426 [2024-11-19 12:28:22.504522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:17.426 [2024-11-19 12:28:22.504531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.426 [2024-11-19 12:28:22.506490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.426 [2024-11-19 12:28:22.506525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.426 BaseBdev2 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:17.426 12:28:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.426 [2024-11-19 12:28:22.516395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.426 [2024-11-19 12:28:22.518161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.426 [2024-11-19 12:28:22.518329] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:17.426 [2024-11-19 12:28:22.518342] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.426 [2024-11-19 12:28:22.518588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:17.426 [2024-11-19 12:28:22.518747] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:17.426 [2024-11-19 12:28:22.518771] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:17.426 [2024-11-19 12:28:22.518884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.426 "name": "raid_bdev1", 00:08:17.426 "uuid": "ed2f5bb4-eb8e-466e-974a-efd0115434c2", 00:08:17.426 "strip_size_kb": 0, 00:08:17.426 "state": "online", 00:08:17.426 "raid_level": "raid1", 00:08:17.426 "superblock": true, 00:08:17.426 "num_base_bdevs": 2, 00:08:17.426 "num_base_bdevs_discovered": 2, 00:08:17.426 "num_base_bdevs_operational": 2, 00:08:17.426 "base_bdevs_list": [ 00:08:17.426 { 00:08:17.426 "name": "BaseBdev1", 00:08:17.426 "uuid": "331688dc-aba7-5b11-ad6e-7594283e78e2", 00:08:17.426 "is_configured": true, 00:08:17.426 "data_offset": 2048, 00:08:17.426 "data_size": 63488 00:08:17.426 }, 00:08:17.426 { 00:08:17.426 "name": "BaseBdev2", 00:08:17.426 "uuid": "de5c2007-4ac7-5e0a-8f8b-1baf6fcdb944", 00:08:17.426 "is_configured": true, 00:08:17.426 "data_offset": 2048, 00:08:17.426 "data_size": 63488 00:08:17.426 } 00:08:17.426 ] 00:08:17.426 }' 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.426 12:28:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.686 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:17.686 12:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:17.976 [2024-11-19 12:28:23.035891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.924 12:28:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.924 12:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.924 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.924 "name": "raid_bdev1", 00:08:18.924 "uuid": "ed2f5bb4-eb8e-466e-974a-efd0115434c2", 00:08:18.924 "strip_size_kb": 0, 00:08:18.924 "state": "online", 00:08:18.924 "raid_level": "raid1", 00:08:18.924 "superblock": true, 00:08:18.924 "num_base_bdevs": 2, 00:08:18.924 "num_base_bdevs_discovered": 2, 00:08:18.924 "num_base_bdevs_operational": 2, 00:08:18.924 "base_bdevs_list": [ 00:08:18.924 { 00:08:18.924 "name": "BaseBdev1", 00:08:18.924 "uuid": "331688dc-aba7-5b11-ad6e-7594283e78e2", 00:08:18.924 "is_configured": true, 00:08:18.925 "data_offset": 2048, 00:08:18.925 "data_size": 63488 00:08:18.925 }, 00:08:18.925 { 00:08:18.925 "name": "BaseBdev2", 00:08:18.925 "uuid": "de5c2007-4ac7-5e0a-8f8b-1baf6fcdb944", 00:08:18.925 "is_configured": true, 00:08:18.925 "data_offset": 2048, 00:08:18.925 "data_size": 63488 
00:08:18.925 } 00:08:18.925 ] 00:08:18.925 }' 00:08:18.925 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.925 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.184 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.184 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.184 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.184 [2024-11-19 12:28:24.394919] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.184 [2024-11-19 12:28:24.395039] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.184 [2024-11-19 12:28:24.397519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.184 [2024-11-19 12:28:24.397622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.184 [2024-11-19 12:28:24.397726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.184 [2024-11-19 12:28:24.397790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:19.184 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.184 { 00:08:19.184 "results": [ 00:08:19.184 { 00:08:19.184 "job": "raid_bdev1", 00:08:19.185 "core_mask": "0x1", 00:08:19.185 "workload": "randrw", 00:08:19.185 "percentage": 50, 00:08:19.185 "status": "finished", 00:08:19.185 "queue_depth": 1, 00:08:19.185 "io_size": 131072, 00:08:19.185 "runtime": 1.359958, 00:08:19.185 "iops": 20357.246326724795, 00:08:19.185 "mibps": 2544.6557908405994, 00:08:19.185 "io_failed": 0, 00:08:19.185 "io_timeout": 0, 00:08:19.185 "avg_latency_us": 46.67545091259829, 00:08:19.185 "min_latency_us": 
21.575545851528386, 00:08:19.185 "max_latency_us": 1373.6803493449781 00:08:19.185 } 00:08:19.185 ], 00:08:19.185 "core_count": 1 00:08:19.185 } 00:08:19.185 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74948 00:08:19.185 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74948 ']' 00:08:19.185 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74948 00:08:19.185 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:19.185 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.185 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74948 00:08:19.185 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.185 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.444 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74948' 00:08:19.444 killing process with pid 74948 00:08:19.444 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74948 00:08:19.444 [2024-11-19 12:28:24.444335] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.444 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74948 00:08:19.444 [2024-11-19 12:28:24.459927] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.444 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TFXX7Dziky 00:08:19.444 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:19.444 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:19.703 ************************************ 00:08:19.703 END TEST 
raid_read_error_test 00:08:19.703 ************************************ 00:08:19.703 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:19.704 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:19.704 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:19.704 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:19.704 12:28:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:19.704 00:08:19.704 real 0m3.224s 00:08:19.704 user 0m4.057s 00:08:19.704 sys 0m0.523s 00:08:19.704 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.704 12:28:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.704 12:28:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:19.704 12:28:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:19.704 12:28:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.704 12:28:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.704 ************************************ 00:08:19.704 START TEST raid_write_error_test 00:08:19.704 ************************************ 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.78gWGvGgxz 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75077 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75077 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75077 ']' 00:08:19.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.704 12:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.704 [2024-11-19 12:28:24.894940] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:19.704 [2024-11-19 12:28:24.895095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75077 ] 00:08:19.964 [2024-11-19 12:28:25.064207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.964 [2024-11-19 12:28:25.109118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.964 [2024-11-19 12:28:25.151750] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.964 [2024-11-19 12:28:25.151793] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.533 BaseBdev1_malloc 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.533 true 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.533 [2024-11-19 12:28:25.745692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:20.533 [2024-11-19 12:28:25.745769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.533 [2024-11-19 12:28:25.745792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:20.533 [2024-11-19 12:28:25.745801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.533 [2024-11-19 12:28:25.747852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.533 [2024-11-19 12:28:25.747957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:20.533 BaseBdev1 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.533 BaseBdev2_malloc 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:20.533 12:28:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.533 true 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.533 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.793 [2024-11-19 12:28:25.794103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:20.793 [2024-11-19 12:28:25.794220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.793 [2024-11-19 12:28:25.794241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:20.793 [2024-11-19 12:28:25.794249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.793 [2024-11-19 12:28:25.796393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.793 [2024-11-19 12:28:25.796432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:20.793 BaseBdev2 00:08:20.793 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.793 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:20.793 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.793 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.794 [2024-11-19 12:28:25.806108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:20.794 [2024-11-19 12:28:25.807899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.794 [2024-11-19 12:28:25.808068] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:20.794 [2024-11-19 12:28:25.808081] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:20.794 [2024-11-19 12:28:25.808320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:20.794 [2024-11-19 12:28:25.808448] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:20.794 [2024-11-19 12:28:25.808461] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:20.794 [2024-11-19 12:28:25.808587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.794 "name": "raid_bdev1", 00:08:20.794 "uuid": "2a4cced6-fd20-4c7a-a89b-aa4f4559f94d", 00:08:20.794 "strip_size_kb": 0, 00:08:20.794 "state": "online", 00:08:20.794 "raid_level": "raid1", 00:08:20.794 "superblock": true, 00:08:20.794 "num_base_bdevs": 2, 00:08:20.794 "num_base_bdevs_discovered": 2, 00:08:20.794 "num_base_bdevs_operational": 2, 00:08:20.794 "base_bdevs_list": [ 00:08:20.794 { 00:08:20.794 "name": "BaseBdev1", 00:08:20.794 "uuid": "283bf7ca-244e-57e2-b1c7-83c9d9e8dab1", 00:08:20.794 "is_configured": true, 00:08:20.794 "data_offset": 2048, 00:08:20.794 "data_size": 63488 00:08:20.794 }, 00:08:20.794 { 00:08:20.794 "name": "BaseBdev2", 00:08:20.794 "uuid": "55b8fa32-fc59-53df-ac30-02a7218abed5", 00:08:20.794 "is_configured": true, 00:08:20.794 "data_offset": 2048, 00:08:20.794 "data_size": 63488 00:08:20.794 } 00:08:20.794 ] 00:08:20.794 }' 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.794 12:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.054 12:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:21.054 12:28:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:21.315 [2024-11-19 12:28:26.369563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.258 [2024-11-19 12:28:27.286044] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:22.258 [2024-11-19 12:28:27.286110] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:22.258 [2024-11-19 12:28:27.286317] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.258 "name": "raid_bdev1", 00:08:22.258 "uuid": "2a4cced6-fd20-4c7a-a89b-aa4f4559f94d", 00:08:22.258 "strip_size_kb": 0, 00:08:22.258 "state": "online", 00:08:22.258 "raid_level": "raid1", 00:08:22.258 "superblock": true, 00:08:22.258 "num_base_bdevs": 2, 00:08:22.258 "num_base_bdevs_discovered": 1, 00:08:22.258 "num_base_bdevs_operational": 1, 00:08:22.258 "base_bdevs_list": [ 00:08:22.258 { 00:08:22.258 "name": null, 00:08:22.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.258 "is_configured": false, 00:08:22.258 "data_offset": 0, 00:08:22.258 "data_size": 63488 00:08:22.258 }, 00:08:22.258 { 00:08:22.258 "name": 
"BaseBdev2", 00:08:22.258 "uuid": "55b8fa32-fc59-53df-ac30-02a7218abed5", 00:08:22.258 "is_configured": true, 00:08:22.258 "data_offset": 2048, 00:08:22.258 "data_size": 63488 00:08:22.258 } 00:08:22.258 ] 00:08:22.258 }' 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.258 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.519 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.519 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.519 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.519 [2024-11-19 12:28:27.679829] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.519 [2024-11-19 12:28:27.679970] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.519 [2024-11-19 12:28:27.682472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.519 [2024-11-19 12:28:27.682563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.519 [2024-11-19 12:28:27.682632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.519 [2024-11-19 12:28:27.682703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:22.519 { 00:08:22.519 "results": [ 00:08:22.519 { 00:08:22.519 "job": "raid_bdev1", 00:08:22.519 "core_mask": "0x1", 00:08:22.519 "workload": "randrw", 00:08:22.519 "percentage": 50, 00:08:22.519 "status": "finished", 00:08:22.519 "queue_depth": 1, 00:08:22.519 "io_size": 131072, 00:08:22.519 "runtime": 1.311113, 00:08:22.519 "iops": 22699.035094610455, 00:08:22.519 "mibps": 2837.379386826307, 00:08:22.519 "io_failed": 0, 00:08:22.519 "io_timeout": 0, 
00:08:22.520 "avg_latency_us": 41.56379658675248, 00:08:22.520 "min_latency_us": 21.910917030567685, 00:08:22.520 "max_latency_us": 1373.6803493449781 00:08:22.520 } 00:08:22.520 ], 00:08:22.520 "core_count": 1 00:08:22.520 } 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75077 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75077 ']' 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75077 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75077 00:08:22.520 killing process with pid 75077 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75077' 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75077 00:08:22.520 [2024-11-19 12:28:27.730119] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.520 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75077 00:08:22.520 [2024-11-19 12:28:27.746083] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v 
Job /raidtest/tmp.78gWGvGgxz 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:22.780 00:08:22.780 real 0m3.216s 00:08:22.780 user 0m4.038s 00:08:22.780 sys 0m0.539s 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.780 ************************************ 00:08:22.780 END TEST raid_write_error_test 00:08:22.780 ************************************ 00:08:22.780 12:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.039 12:28:28 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:23.039 12:28:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:23.039 12:28:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:23.039 12:28:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:23.039 12:28:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.039 12:28:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.039 ************************************ 00:08:23.039 START TEST raid_state_function_test 00:08:23.039 ************************************ 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:23.039 12:28:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75204 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:23.039 Process raid pid: 75204 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75204' 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75204 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75204 ']' 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.039 12:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.039 [2024-11-19 12:28:28.169210] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:23.039 [2024-11-19 12:28:28.169429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.299 [2024-11-19 12:28:28.315390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.299 [2024-11-19 12:28:28.359983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.299 [2024-11-19 12:28:28.402891] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.299 [2024-11-19 12:28:28.403016] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.868 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.869 [2024-11-19 12:28:29.016736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.869 [2024-11-19 12:28:29.016807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.869 [2024-11-19 12:28:29.016838] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.869 [2024-11-19 12:28:29.016849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.869 [2024-11-19 12:28:29.016855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.869 [2024-11-19 12:28:29.016866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.869 "name": "Existed_Raid", 00:08:23.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.869 "strip_size_kb": 64, 00:08:23.869 "state": "configuring", 00:08:23.869 "raid_level": "raid0", 00:08:23.869 "superblock": false, 00:08:23.869 "num_base_bdevs": 3, 00:08:23.869 "num_base_bdevs_discovered": 0, 00:08:23.869 "num_base_bdevs_operational": 3, 00:08:23.869 "base_bdevs_list": [ 00:08:23.869 { 00:08:23.869 "name": "BaseBdev1", 00:08:23.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.869 "is_configured": false, 00:08:23.869 "data_offset": 0, 00:08:23.869 "data_size": 0 00:08:23.869 }, 00:08:23.869 { 00:08:23.869 "name": "BaseBdev2", 00:08:23.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.869 "is_configured": false, 00:08:23.869 "data_offset": 0, 00:08:23.869 "data_size": 0 00:08:23.869 }, 00:08:23.869 { 00:08:23.869 "name": "BaseBdev3", 00:08:23.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.869 "is_configured": false, 00:08:23.869 "data_offset": 0, 00:08:23.869 "data_size": 0 00:08:23.869 } 00:08:23.869 ] 00:08:23.869 }' 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.869 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.137 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.137 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.137 12:28:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.137 [2024-11-19 12:28:29.388008] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.137 [2024-11-19 12:28:29.388055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:24.137 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.137 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.137 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.137 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.402 [2024-11-19 12:28:29.400014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.402 [2024-11-19 12:28:29.400055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.402 [2024-11-19 12:28:29.400064] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.402 [2024-11-19 12:28:29.400073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.402 [2024-11-19 12:28:29.400079] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.402 [2024-11-19 12:28:29.400088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.402 [2024-11-19 12:28:29.420595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.402 BaseBdev1 00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.402 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.403 [ 00:08:24.403 { 00:08:24.403 "name": "BaseBdev1", 00:08:24.403 "aliases": [ 00:08:24.403 "d04667b5-39f6-47bc-98c6-c955d7b6fdaf" 00:08:24.403 ], 00:08:24.403 
"product_name": "Malloc disk", 00:08:24.403 "block_size": 512, 00:08:24.403 "num_blocks": 65536, 00:08:24.403 "uuid": "d04667b5-39f6-47bc-98c6-c955d7b6fdaf", 00:08:24.403 "assigned_rate_limits": { 00:08:24.403 "rw_ios_per_sec": 0, 00:08:24.403 "rw_mbytes_per_sec": 0, 00:08:24.403 "r_mbytes_per_sec": 0, 00:08:24.403 "w_mbytes_per_sec": 0 00:08:24.403 }, 00:08:24.403 "claimed": true, 00:08:24.403 "claim_type": "exclusive_write", 00:08:24.403 "zoned": false, 00:08:24.403 "supported_io_types": { 00:08:24.403 "read": true, 00:08:24.403 "write": true, 00:08:24.403 "unmap": true, 00:08:24.403 "flush": true, 00:08:24.403 "reset": true, 00:08:24.403 "nvme_admin": false, 00:08:24.403 "nvme_io": false, 00:08:24.403 "nvme_io_md": false, 00:08:24.403 "write_zeroes": true, 00:08:24.403 "zcopy": true, 00:08:24.403 "get_zone_info": false, 00:08:24.403 "zone_management": false, 00:08:24.403 "zone_append": false, 00:08:24.403 "compare": false, 00:08:24.403 "compare_and_write": false, 00:08:24.403 "abort": true, 00:08:24.403 "seek_hole": false, 00:08:24.403 "seek_data": false, 00:08:24.403 "copy": true, 00:08:24.403 "nvme_iov_md": false 00:08:24.403 }, 00:08:24.403 "memory_domains": [ 00:08:24.403 { 00:08:24.403 "dma_device_id": "system", 00:08:24.403 "dma_device_type": 1 00:08:24.403 }, 00:08:24.403 { 00:08:24.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.403 "dma_device_type": 2 00:08:24.403 } 00:08:24.403 ], 00:08:24.403 "driver_specific": {} 00:08:24.403 } 00:08:24.403 ] 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.403 12:28:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.403 "name": "Existed_Raid", 00:08:24.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.403 "strip_size_kb": 64, 00:08:24.403 "state": "configuring", 00:08:24.403 "raid_level": "raid0", 00:08:24.403 "superblock": false, 00:08:24.403 "num_base_bdevs": 3, 00:08:24.403 "num_base_bdevs_discovered": 1, 00:08:24.403 "num_base_bdevs_operational": 3, 00:08:24.403 "base_bdevs_list": [ 00:08:24.403 { 00:08:24.403 "name": "BaseBdev1", 
00:08:24.403 "uuid": "d04667b5-39f6-47bc-98c6-c955d7b6fdaf", 00:08:24.403 "is_configured": true, 00:08:24.403 "data_offset": 0, 00:08:24.403 "data_size": 65536 00:08:24.403 }, 00:08:24.403 { 00:08:24.403 "name": "BaseBdev2", 00:08:24.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.403 "is_configured": false, 00:08:24.403 "data_offset": 0, 00:08:24.403 "data_size": 0 00:08:24.403 }, 00:08:24.403 { 00:08:24.403 "name": "BaseBdev3", 00:08:24.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.403 "is_configured": false, 00:08:24.403 "data_offset": 0, 00:08:24.403 "data_size": 0 00:08:24.403 } 00:08:24.403 ] 00:08:24.403 }' 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.403 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 [2024-11-19 12:28:29.847891] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.663 [2024-11-19 12:28:29.847938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 [2024-11-19 
12:28:29.859905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.663 [2024-11-19 12:28:29.861680] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.663 [2024-11-19 12:28:29.861717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.663 [2024-11-19 12:28:29.861727] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.663 [2024-11-19 12:28:29.861753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.663 "name": "Existed_Raid", 00:08:24.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.663 "strip_size_kb": 64, 00:08:24.663 "state": "configuring", 00:08:24.663 "raid_level": "raid0", 00:08:24.663 "superblock": false, 00:08:24.663 "num_base_bdevs": 3, 00:08:24.663 "num_base_bdevs_discovered": 1, 00:08:24.663 "num_base_bdevs_operational": 3, 00:08:24.663 "base_bdevs_list": [ 00:08:24.663 { 00:08:24.663 "name": "BaseBdev1", 00:08:24.663 "uuid": "d04667b5-39f6-47bc-98c6-c955d7b6fdaf", 00:08:24.663 "is_configured": true, 00:08:24.663 "data_offset": 0, 00:08:24.663 "data_size": 65536 00:08:24.663 }, 00:08:24.663 { 00:08:24.663 "name": "BaseBdev2", 00:08:24.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.663 "is_configured": false, 00:08:24.663 "data_offset": 0, 00:08:24.663 "data_size": 0 00:08:24.663 }, 00:08:24.663 { 00:08:24.663 "name": "BaseBdev3", 00:08:24.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.663 "is_configured": false, 00:08:24.663 "data_offset": 0, 00:08:24.663 "data_size": 0 00:08:24.663 } 00:08:24.663 ] 00:08:24.663 }' 00:08:24.663 12:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:24.933 12:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.194 [2024-11-19 12:28:30.322052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.194 BaseBdev2 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.194 12:28:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.194 [ 00:08:25.194 { 00:08:25.194 "name": "BaseBdev2", 00:08:25.194 "aliases": [ 00:08:25.194 "7fecb13d-ee9a-4fcf-8ba5-3896a8427f3e" 00:08:25.194 ], 00:08:25.194 "product_name": "Malloc disk", 00:08:25.194 "block_size": 512, 00:08:25.194 "num_blocks": 65536, 00:08:25.194 "uuid": "7fecb13d-ee9a-4fcf-8ba5-3896a8427f3e", 00:08:25.194 "assigned_rate_limits": { 00:08:25.194 "rw_ios_per_sec": 0, 00:08:25.194 "rw_mbytes_per_sec": 0, 00:08:25.194 "r_mbytes_per_sec": 0, 00:08:25.194 "w_mbytes_per_sec": 0 00:08:25.194 }, 00:08:25.194 "claimed": true, 00:08:25.194 "claim_type": "exclusive_write", 00:08:25.194 "zoned": false, 00:08:25.194 "supported_io_types": { 00:08:25.194 "read": true, 00:08:25.194 "write": true, 00:08:25.194 "unmap": true, 00:08:25.194 "flush": true, 00:08:25.194 "reset": true, 00:08:25.194 "nvme_admin": false, 00:08:25.194 "nvme_io": false, 00:08:25.194 "nvme_io_md": false, 00:08:25.194 "write_zeroes": true, 00:08:25.194 "zcopy": true, 00:08:25.194 "get_zone_info": false, 00:08:25.194 "zone_management": false, 00:08:25.194 "zone_append": false, 00:08:25.194 "compare": false, 00:08:25.194 "compare_and_write": false, 00:08:25.194 "abort": true, 00:08:25.194 "seek_hole": false, 00:08:25.194 "seek_data": false, 00:08:25.194 "copy": true, 00:08:25.194 "nvme_iov_md": false 00:08:25.194 }, 00:08:25.194 "memory_domains": [ 00:08:25.194 { 00:08:25.194 "dma_device_id": "system", 00:08:25.194 "dma_device_type": 1 00:08:25.194 }, 00:08:25.194 { 00:08:25.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.194 "dma_device_type": 2 00:08:25.194 } 00:08:25.194 ], 00:08:25.194 "driver_specific": {} 00:08:25.194 } 00:08:25.194 ] 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.194 12:28:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.194 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.195 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.195 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.195 12:28:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.195 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.195 "name": "Existed_Raid", 00:08:25.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.195 "strip_size_kb": 64, 00:08:25.195 "state": "configuring", 00:08:25.195 "raid_level": "raid0", 00:08:25.195 "superblock": false, 00:08:25.195 "num_base_bdevs": 3, 00:08:25.195 "num_base_bdevs_discovered": 2, 00:08:25.195 "num_base_bdevs_operational": 3, 00:08:25.195 "base_bdevs_list": [ 00:08:25.195 { 00:08:25.195 "name": "BaseBdev1", 00:08:25.195 "uuid": "d04667b5-39f6-47bc-98c6-c955d7b6fdaf", 00:08:25.195 "is_configured": true, 00:08:25.195 "data_offset": 0, 00:08:25.195 "data_size": 65536 00:08:25.195 }, 00:08:25.195 { 00:08:25.195 "name": "BaseBdev2", 00:08:25.195 "uuid": "7fecb13d-ee9a-4fcf-8ba5-3896a8427f3e", 00:08:25.195 "is_configured": true, 00:08:25.195 "data_offset": 0, 00:08:25.195 "data_size": 65536 00:08:25.195 }, 00:08:25.195 { 00:08:25.195 "name": "BaseBdev3", 00:08:25.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.195 "is_configured": false, 00:08:25.195 "data_offset": 0, 00:08:25.195 "data_size": 0 00:08:25.195 } 00:08:25.195 ] 00:08:25.195 }' 00:08:25.195 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.195 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.764 BaseBdev3 00:08:25.764 [2024-11-19 12:28:30.800103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.764 [2024-11-19 12:28:30.800148] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:25.764 [2024-11-19 12:28:30.800160] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:25.764 [2024-11-19 12:28:30.800451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:25.764 [2024-11-19 12:28:30.800586] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:25.764 [2024-11-19 12:28:30.800600] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:25.764 [2024-11-19 12:28:30.800803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.764 12:28:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.764 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.764 [ 00:08:25.764 { 00:08:25.764 "name": "BaseBdev3", 00:08:25.764 "aliases": [ 00:08:25.764 "1641f1ba-66ac-461a-a2f7-edbddf563d63" 00:08:25.764 ], 00:08:25.764 "product_name": "Malloc disk", 00:08:25.764 "block_size": 512, 00:08:25.764 "num_blocks": 65536, 00:08:25.764 "uuid": "1641f1ba-66ac-461a-a2f7-edbddf563d63", 00:08:25.764 "assigned_rate_limits": { 00:08:25.764 "rw_ios_per_sec": 0, 00:08:25.764 "rw_mbytes_per_sec": 0, 00:08:25.764 "r_mbytes_per_sec": 0, 00:08:25.764 "w_mbytes_per_sec": 0 00:08:25.764 }, 00:08:25.764 "claimed": true, 00:08:25.765 "claim_type": "exclusive_write", 00:08:25.765 "zoned": false, 00:08:25.765 "supported_io_types": { 00:08:25.765 "read": true, 00:08:25.765 "write": true, 00:08:25.765 "unmap": true, 00:08:25.765 "flush": true, 00:08:25.765 "reset": true, 00:08:25.765 "nvme_admin": false, 00:08:25.765 "nvme_io": false, 00:08:25.765 "nvme_io_md": false, 00:08:25.765 "write_zeroes": true, 00:08:25.765 "zcopy": true, 00:08:25.765 "get_zone_info": false, 00:08:25.765 "zone_management": false, 00:08:25.765 "zone_append": false, 00:08:25.765 "compare": false, 00:08:25.765 "compare_and_write": false, 00:08:25.765 "abort": true, 00:08:25.765 "seek_hole": false, 00:08:25.765 "seek_data": false, 00:08:25.765 "copy": true, 00:08:25.765 "nvme_iov_md": false 00:08:25.765 }, 00:08:25.765 "memory_domains": [ 00:08:25.765 { 00:08:25.765 "dma_device_id": "system", 00:08:25.765 "dma_device_type": 1 00:08:25.765 }, 00:08:25.765 { 00:08:25.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.765 "dma_device_type": 2 00:08:25.765 } 00:08:25.765 ], 00:08:25.765 "driver_specific": {} 00:08:25.765 } 00:08:25.765 ] 
00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.765 "name": "Existed_Raid", 00:08:25.765 "uuid": "e5120b7f-27c2-4e74-bf42-37164b6e5664", 00:08:25.765 "strip_size_kb": 64, 00:08:25.765 "state": "online", 00:08:25.765 "raid_level": "raid0", 00:08:25.765 "superblock": false, 00:08:25.765 "num_base_bdevs": 3, 00:08:25.765 "num_base_bdevs_discovered": 3, 00:08:25.765 "num_base_bdevs_operational": 3, 00:08:25.765 "base_bdevs_list": [ 00:08:25.765 { 00:08:25.765 "name": "BaseBdev1", 00:08:25.765 "uuid": "d04667b5-39f6-47bc-98c6-c955d7b6fdaf", 00:08:25.765 "is_configured": true, 00:08:25.765 "data_offset": 0, 00:08:25.765 "data_size": 65536 00:08:25.765 }, 00:08:25.765 { 00:08:25.765 "name": "BaseBdev2", 00:08:25.765 "uuid": "7fecb13d-ee9a-4fcf-8ba5-3896a8427f3e", 00:08:25.765 "is_configured": true, 00:08:25.765 "data_offset": 0, 00:08:25.765 "data_size": 65536 00:08:25.765 }, 00:08:25.765 { 00:08:25.765 "name": "BaseBdev3", 00:08:25.765 "uuid": "1641f1ba-66ac-461a-a2f7-edbddf563d63", 00:08:25.765 "is_configured": true, 00:08:25.765 "data_offset": 0, 00:08:25.765 "data_size": 65536 00:08:25.765 } 00:08:25.765 ] 00:08:25.765 }' 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.765 12:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.334 [2024-11-19 12:28:31.327536] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.334 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.334 "name": "Existed_Raid", 00:08:26.334 "aliases": [ 00:08:26.334 "e5120b7f-27c2-4e74-bf42-37164b6e5664" 00:08:26.334 ], 00:08:26.334 "product_name": "Raid Volume", 00:08:26.334 "block_size": 512, 00:08:26.334 "num_blocks": 196608, 00:08:26.334 "uuid": "e5120b7f-27c2-4e74-bf42-37164b6e5664", 00:08:26.334 "assigned_rate_limits": { 00:08:26.334 "rw_ios_per_sec": 0, 00:08:26.334 "rw_mbytes_per_sec": 0, 00:08:26.334 "r_mbytes_per_sec": 0, 00:08:26.334 "w_mbytes_per_sec": 0 00:08:26.334 }, 00:08:26.334 "claimed": false, 00:08:26.334 "zoned": false, 00:08:26.334 "supported_io_types": { 00:08:26.334 "read": true, 00:08:26.334 "write": true, 00:08:26.334 "unmap": true, 00:08:26.334 "flush": true, 00:08:26.334 "reset": true, 00:08:26.334 "nvme_admin": false, 00:08:26.334 "nvme_io": false, 00:08:26.334 "nvme_io_md": false, 00:08:26.334 "write_zeroes": true, 00:08:26.334 "zcopy": false, 00:08:26.334 "get_zone_info": false, 00:08:26.334 "zone_management": false, 00:08:26.334 
"zone_append": false, 00:08:26.334 "compare": false, 00:08:26.334 "compare_and_write": false, 00:08:26.334 "abort": false, 00:08:26.334 "seek_hole": false, 00:08:26.334 "seek_data": false, 00:08:26.334 "copy": false, 00:08:26.334 "nvme_iov_md": false 00:08:26.334 }, 00:08:26.334 "memory_domains": [ 00:08:26.334 { 00:08:26.334 "dma_device_id": "system", 00:08:26.334 "dma_device_type": 1 00:08:26.334 }, 00:08:26.334 { 00:08:26.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.334 "dma_device_type": 2 00:08:26.334 }, 00:08:26.334 { 00:08:26.334 "dma_device_id": "system", 00:08:26.334 "dma_device_type": 1 00:08:26.334 }, 00:08:26.334 { 00:08:26.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.334 "dma_device_type": 2 00:08:26.334 }, 00:08:26.334 { 00:08:26.334 "dma_device_id": "system", 00:08:26.334 "dma_device_type": 1 00:08:26.334 }, 00:08:26.334 { 00:08:26.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.334 "dma_device_type": 2 00:08:26.334 } 00:08:26.334 ], 00:08:26.334 "driver_specific": { 00:08:26.334 "raid": { 00:08:26.334 "uuid": "e5120b7f-27c2-4e74-bf42-37164b6e5664", 00:08:26.334 "strip_size_kb": 64, 00:08:26.334 "state": "online", 00:08:26.334 "raid_level": "raid0", 00:08:26.334 "superblock": false, 00:08:26.334 "num_base_bdevs": 3, 00:08:26.334 "num_base_bdevs_discovered": 3, 00:08:26.334 "num_base_bdevs_operational": 3, 00:08:26.334 "base_bdevs_list": [ 00:08:26.334 { 00:08:26.334 "name": "BaseBdev1", 00:08:26.334 "uuid": "d04667b5-39f6-47bc-98c6-c955d7b6fdaf", 00:08:26.334 "is_configured": true, 00:08:26.334 "data_offset": 0, 00:08:26.334 "data_size": 65536 00:08:26.334 }, 00:08:26.334 { 00:08:26.334 "name": "BaseBdev2", 00:08:26.334 "uuid": "7fecb13d-ee9a-4fcf-8ba5-3896a8427f3e", 00:08:26.334 "is_configured": true, 00:08:26.334 "data_offset": 0, 00:08:26.334 "data_size": 65536 00:08:26.334 }, 00:08:26.334 { 00:08:26.334 "name": "BaseBdev3", 00:08:26.334 "uuid": "1641f1ba-66ac-461a-a2f7-edbddf563d63", 00:08:26.334 "is_configured": true, 
00:08:26.334 "data_offset": 0, 00:08:26.334 "data_size": 65536 00:08:26.334 } 00:08:26.334 ] 00:08:26.334 } 00:08:26.334 } 00:08:26.334 }' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:26.335 BaseBdev2 00:08:26.335 BaseBdev3' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.335 12:28:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.335 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.335 [2024-11-19 12:28:31.582896] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.335 [2024-11-19 12:28:31.582969] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.335 [2024-11-19 12:28:31.583049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.594 "name": "Existed_Raid", 00:08:26.594 "uuid": "e5120b7f-27c2-4e74-bf42-37164b6e5664", 00:08:26.594 "strip_size_kb": 64, 00:08:26.594 "state": "offline", 00:08:26.594 "raid_level": "raid0", 00:08:26.594 "superblock": false, 00:08:26.594 "num_base_bdevs": 3, 00:08:26.594 "num_base_bdevs_discovered": 2, 00:08:26.594 "num_base_bdevs_operational": 2, 00:08:26.594 "base_bdevs_list": [ 00:08:26.594 { 00:08:26.594 "name": null, 00:08:26.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.594 "is_configured": false, 00:08:26.594 "data_offset": 0, 00:08:26.594 "data_size": 65536 00:08:26.594 }, 00:08:26.594 { 00:08:26.594 "name": "BaseBdev2", 00:08:26.594 "uuid": "7fecb13d-ee9a-4fcf-8ba5-3896a8427f3e", 00:08:26.594 "is_configured": true, 00:08:26.594 "data_offset": 0, 00:08:26.594 "data_size": 65536 00:08:26.594 }, 00:08:26.594 { 00:08:26.594 "name": "BaseBdev3", 00:08:26.594 "uuid": "1641f1ba-66ac-461a-a2f7-edbddf563d63", 00:08:26.594 "is_configured": true, 00:08:26.594 "data_offset": 0, 00:08:26.594 "data_size": 65536 00:08:26.594 } 00:08:26.594 ] 00:08:26.594 }' 00:08:26.594 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.594 12:28:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.854 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:26.854 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.854 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.854 12:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.854 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.854 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.854 12:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.854 [2024-11-19 12:28:32.025311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.854 12:28:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.854 [2024-11-19 12:28:32.096398] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.854 [2024-11-19 12:28:32.096449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.854 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.114 BaseBdev2 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:27.114 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.115 [ 00:08:27.115 { 00:08:27.115 "name": "BaseBdev2", 00:08:27.115 "aliases": [ 00:08:27.115 "f637180c-9c09-4a1a-96d2-4f732640ff68" 00:08:27.115 ], 00:08:27.115 "product_name": "Malloc disk", 00:08:27.115 "block_size": 512, 00:08:27.115 "num_blocks": 65536, 00:08:27.115 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:27.115 "assigned_rate_limits": { 00:08:27.115 "rw_ios_per_sec": 0, 00:08:27.115 "rw_mbytes_per_sec": 0, 00:08:27.115 "r_mbytes_per_sec": 0, 00:08:27.115 "w_mbytes_per_sec": 0 00:08:27.115 }, 00:08:27.115 "claimed": false, 00:08:27.115 "zoned": false, 00:08:27.115 "supported_io_types": { 00:08:27.115 "read": true, 00:08:27.115 "write": true, 00:08:27.115 "unmap": true, 00:08:27.115 "flush": true, 00:08:27.115 "reset": true, 00:08:27.115 "nvme_admin": false, 00:08:27.115 "nvme_io": false, 00:08:27.115 "nvme_io_md": false, 00:08:27.115 "write_zeroes": true, 00:08:27.115 "zcopy": true, 00:08:27.115 "get_zone_info": false, 00:08:27.115 "zone_management": false, 00:08:27.115 "zone_append": false, 00:08:27.115 "compare": false, 00:08:27.115 "compare_and_write": false, 00:08:27.115 "abort": true, 00:08:27.115 "seek_hole": false, 00:08:27.115 "seek_data": false, 00:08:27.115 "copy": true, 00:08:27.115 "nvme_iov_md": false 00:08:27.115 }, 00:08:27.115 "memory_domains": [ 00:08:27.115 { 00:08:27.115 "dma_device_id": "system", 00:08:27.115 "dma_device_type": 1 00:08:27.115 }, 
00:08:27.115 { 00:08:27.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.115 "dma_device_type": 2 00:08:27.115 } 00:08:27.115 ], 00:08:27.115 "driver_specific": {} 00:08:27.115 } 00:08:27.115 ] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.115 BaseBdev3 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.115 [ 00:08:27.115 { 00:08:27.115 "name": "BaseBdev3", 00:08:27.115 "aliases": [ 00:08:27.115 "574eaeb8-9e60-4c25-a6c3-628f61385147" 00:08:27.115 ], 00:08:27.115 "product_name": "Malloc disk", 00:08:27.115 "block_size": 512, 00:08:27.115 "num_blocks": 65536, 00:08:27.115 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:27.115 "assigned_rate_limits": { 00:08:27.115 "rw_ios_per_sec": 0, 00:08:27.115 "rw_mbytes_per_sec": 0, 00:08:27.115 "r_mbytes_per_sec": 0, 00:08:27.115 "w_mbytes_per_sec": 0 00:08:27.115 }, 00:08:27.115 "claimed": false, 00:08:27.115 "zoned": false, 00:08:27.115 "supported_io_types": { 00:08:27.115 "read": true, 00:08:27.115 "write": true, 00:08:27.115 "unmap": true, 00:08:27.115 "flush": true, 00:08:27.115 "reset": true, 00:08:27.115 "nvme_admin": false, 00:08:27.115 "nvme_io": false, 00:08:27.115 "nvme_io_md": false, 00:08:27.115 "write_zeroes": true, 00:08:27.115 "zcopy": true, 00:08:27.115 "get_zone_info": false, 00:08:27.115 "zone_management": false, 00:08:27.115 "zone_append": false, 00:08:27.115 "compare": false, 00:08:27.115 "compare_and_write": false, 00:08:27.115 "abort": true, 00:08:27.115 "seek_hole": false, 00:08:27.115 "seek_data": false, 00:08:27.115 "copy": true, 00:08:27.115 "nvme_iov_md": false 00:08:27.115 }, 00:08:27.115 "memory_domains": [ 00:08:27.115 { 00:08:27.115 "dma_device_id": "system", 00:08:27.115 "dma_device_type": 1 00:08:27.115 }, 00:08:27.115 { 
00:08:27.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.115 "dma_device_type": 2 00:08:27.115 } 00:08:27.115 ], 00:08:27.115 "driver_specific": {} 00:08:27.115 } 00:08:27.115 ] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.115 [2024-11-19 12:28:32.272220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.115 [2024-11-19 12:28:32.272321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.115 [2024-11-19 12:28:32.272377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.115 [2024-11-19 12:28:32.274155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.115 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.115 "name": "Existed_Raid", 00:08:27.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.115 "strip_size_kb": 64, 00:08:27.115 "state": "configuring", 00:08:27.115 "raid_level": "raid0", 00:08:27.115 "superblock": false, 00:08:27.115 "num_base_bdevs": 3, 00:08:27.115 "num_base_bdevs_discovered": 2, 00:08:27.115 "num_base_bdevs_operational": 3, 00:08:27.115 "base_bdevs_list": [ 00:08:27.115 { 00:08:27.115 "name": "BaseBdev1", 00:08:27.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.115 
"is_configured": false, 00:08:27.115 "data_offset": 0, 00:08:27.115 "data_size": 0 00:08:27.115 }, 00:08:27.116 { 00:08:27.116 "name": "BaseBdev2", 00:08:27.116 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:27.116 "is_configured": true, 00:08:27.116 "data_offset": 0, 00:08:27.116 "data_size": 65536 00:08:27.116 }, 00:08:27.116 { 00:08:27.116 "name": "BaseBdev3", 00:08:27.116 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:27.116 "is_configured": true, 00:08:27.116 "data_offset": 0, 00:08:27.116 "data_size": 65536 00:08:27.116 } 00:08:27.116 ] 00:08:27.116 }' 00:08:27.116 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.116 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.684 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.685 [2024-11-19 12:28:32.751387] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.685 12:28:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.685 "name": "Existed_Raid", 00:08:27.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.685 "strip_size_kb": 64, 00:08:27.685 "state": "configuring", 00:08:27.685 "raid_level": "raid0", 00:08:27.685 "superblock": false, 00:08:27.685 "num_base_bdevs": 3, 00:08:27.685 "num_base_bdevs_discovered": 1, 00:08:27.685 "num_base_bdevs_operational": 3, 00:08:27.685 "base_bdevs_list": [ 00:08:27.685 { 00:08:27.685 "name": "BaseBdev1", 00:08:27.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.685 "is_configured": false, 00:08:27.685 "data_offset": 0, 00:08:27.685 "data_size": 0 00:08:27.685 }, 00:08:27.685 { 00:08:27.685 "name": null, 00:08:27.685 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:27.685 "is_configured": false, 00:08:27.685 "data_offset": 0, 
00:08:27.685 "data_size": 65536 00:08:27.685 }, 00:08:27.685 { 00:08:27.685 "name": "BaseBdev3", 00:08:27.685 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:27.685 "is_configured": true, 00:08:27.685 "data_offset": 0, 00:08:27.685 "data_size": 65536 00:08:27.685 } 00:08:27.685 ] 00:08:27.685 }' 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.685 12:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.945 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.945 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.945 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.945 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.945 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.205 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:28.205 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.206 [2024-11-19 12:28:33.229668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.206 BaseBdev1 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.206 [ 00:08:28.206 { 00:08:28.206 "name": "BaseBdev1", 00:08:28.206 "aliases": [ 00:08:28.206 "4a964656-ef89-4adf-a778-31197a89b605" 00:08:28.206 ], 00:08:28.206 "product_name": "Malloc disk", 00:08:28.206 "block_size": 512, 00:08:28.206 "num_blocks": 65536, 00:08:28.206 "uuid": "4a964656-ef89-4adf-a778-31197a89b605", 00:08:28.206 "assigned_rate_limits": { 00:08:28.206 "rw_ios_per_sec": 0, 00:08:28.206 "rw_mbytes_per_sec": 0, 00:08:28.206 "r_mbytes_per_sec": 0, 00:08:28.206 "w_mbytes_per_sec": 0 00:08:28.206 }, 00:08:28.206 "claimed": true, 00:08:28.206 "claim_type": "exclusive_write", 00:08:28.206 "zoned": false, 00:08:28.206 "supported_io_types": { 00:08:28.206 "read": true, 00:08:28.206 "write": true, 00:08:28.206 "unmap": 
true, 00:08:28.206 "flush": true, 00:08:28.206 "reset": true, 00:08:28.206 "nvme_admin": false, 00:08:28.206 "nvme_io": false, 00:08:28.206 "nvme_io_md": false, 00:08:28.206 "write_zeroes": true, 00:08:28.206 "zcopy": true, 00:08:28.206 "get_zone_info": false, 00:08:28.206 "zone_management": false, 00:08:28.206 "zone_append": false, 00:08:28.206 "compare": false, 00:08:28.206 "compare_and_write": false, 00:08:28.206 "abort": true, 00:08:28.206 "seek_hole": false, 00:08:28.206 "seek_data": false, 00:08:28.206 "copy": true, 00:08:28.206 "nvme_iov_md": false 00:08:28.206 }, 00:08:28.206 "memory_domains": [ 00:08:28.206 { 00:08:28.206 "dma_device_id": "system", 00:08:28.206 "dma_device_type": 1 00:08:28.206 }, 00:08:28.206 { 00:08:28.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.206 "dma_device_type": 2 00:08:28.206 } 00:08:28.206 ], 00:08:28.206 "driver_specific": {} 00:08:28.206 } 00:08:28.206 ] 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.206 12:28:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.206 "name": "Existed_Raid", 00:08:28.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.206 "strip_size_kb": 64, 00:08:28.206 "state": "configuring", 00:08:28.206 "raid_level": "raid0", 00:08:28.206 "superblock": false, 00:08:28.206 "num_base_bdevs": 3, 00:08:28.206 "num_base_bdevs_discovered": 2, 00:08:28.206 "num_base_bdevs_operational": 3, 00:08:28.206 "base_bdevs_list": [ 00:08:28.206 { 00:08:28.206 "name": "BaseBdev1", 00:08:28.206 "uuid": "4a964656-ef89-4adf-a778-31197a89b605", 00:08:28.206 "is_configured": true, 00:08:28.206 "data_offset": 0, 00:08:28.206 "data_size": 65536 00:08:28.206 }, 00:08:28.206 { 00:08:28.206 "name": null, 00:08:28.206 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:28.206 "is_configured": false, 00:08:28.206 "data_offset": 0, 00:08:28.206 "data_size": 65536 00:08:28.206 }, 00:08:28.206 { 00:08:28.206 "name": "BaseBdev3", 00:08:28.206 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:28.206 "is_configured": true, 00:08:28.206 "data_offset": 0, 
00:08:28.206 "data_size": 65536 00:08:28.206 } 00:08:28.206 ] 00:08:28.206 }' 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.206 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.466 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.466 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.466 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.466 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:28.466 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.466 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:28.466 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:28.466 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.466 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.466 [2024-11-19 12:28:33.720884] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.727 "name": "Existed_Raid", 00:08:28.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.727 "strip_size_kb": 64, 00:08:28.727 "state": "configuring", 00:08:28.727 "raid_level": "raid0", 00:08:28.727 "superblock": false, 00:08:28.727 "num_base_bdevs": 3, 00:08:28.727 "num_base_bdevs_discovered": 1, 00:08:28.727 "num_base_bdevs_operational": 3, 00:08:28.727 "base_bdevs_list": [ 00:08:28.727 { 00:08:28.727 "name": "BaseBdev1", 00:08:28.727 "uuid": "4a964656-ef89-4adf-a778-31197a89b605", 00:08:28.727 "is_configured": true, 00:08:28.727 "data_offset": 0, 00:08:28.727 "data_size": 65536 00:08:28.727 }, 00:08:28.727 { 
00:08:28.727 "name": null, 00:08:28.727 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:28.727 "is_configured": false, 00:08:28.727 "data_offset": 0, 00:08:28.727 "data_size": 65536 00:08:28.727 }, 00:08:28.727 { 00:08:28.727 "name": null, 00:08:28.727 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:28.727 "is_configured": false, 00:08:28.727 "data_offset": 0, 00:08:28.727 "data_size": 65536 00:08:28.727 } 00:08:28.727 ] 00:08:28.727 }' 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.727 12:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.987 [2024-11-19 12:28:34.188094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.987 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.247 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.247 "name": "Existed_Raid", 00:08:29.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.247 "strip_size_kb": 64, 00:08:29.247 "state": "configuring", 00:08:29.247 "raid_level": "raid0", 00:08:29.247 
"superblock": false, 00:08:29.247 "num_base_bdevs": 3, 00:08:29.247 "num_base_bdevs_discovered": 2, 00:08:29.247 "num_base_bdevs_operational": 3, 00:08:29.247 "base_bdevs_list": [ 00:08:29.247 { 00:08:29.247 "name": "BaseBdev1", 00:08:29.247 "uuid": "4a964656-ef89-4adf-a778-31197a89b605", 00:08:29.247 "is_configured": true, 00:08:29.247 "data_offset": 0, 00:08:29.247 "data_size": 65536 00:08:29.247 }, 00:08:29.247 { 00:08:29.247 "name": null, 00:08:29.247 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:29.247 "is_configured": false, 00:08:29.247 "data_offset": 0, 00:08:29.247 "data_size": 65536 00:08:29.247 }, 00:08:29.247 { 00:08:29.247 "name": "BaseBdev3", 00:08:29.247 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:29.247 "is_configured": true, 00:08:29.247 "data_offset": 0, 00:08:29.247 "data_size": 65536 00:08:29.247 } 00:08:29.247 ] 00:08:29.247 }' 00:08:29.247 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.247 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.508 [2024-11-19 12:28:34.723202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.508 12:28:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.768 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.768 "name": "Existed_Raid", 00:08:29.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.768 "strip_size_kb": 64, 00:08:29.768 "state": "configuring", 00:08:29.768 "raid_level": "raid0", 00:08:29.768 "superblock": false, 00:08:29.768 "num_base_bdevs": 3, 00:08:29.768 "num_base_bdevs_discovered": 1, 00:08:29.768 "num_base_bdevs_operational": 3, 00:08:29.768 "base_bdevs_list": [ 00:08:29.768 { 00:08:29.768 "name": null, 00:08:29.768 "uuid": "4a964656-ef89-4adf-a778-31197a89b605", 00:08:29.768 "is_configured": false, 00:08:29.768 "data_offset": 0, 00:08:29.768 "data_size": 65536 00:08:29.768 }, 00:08:29.768 { 00:08:29.768 "name": null, 00:08:29.768 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:29.768 "is_configured": false, 00:08:29.768 "data_offset": 0, 00:08:29.768 "data_size": 65536 00:08:29.768 }, 00:08:29.768 { 00:08:29.768 "name": "BaseBdev3", 00:08:29.768 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:29.768 "is_configured": true, 00:08:29.768 "data_offset": 0, 00:08:29.768 "data_size": 65536 00:08:29.768 } 00:08:29.768 ] 00:08:29.768 }' 00:08:29.768 12:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.768 12:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.028 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.029 [2024-11-19 12:28:35.196810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.029 "name": "Existed_Raid", 00:08:30.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.029 "strip_size_kb": 64, 00:08:30.029 "state": "configuring", 00:08:30.029 "raid_level": "raid0", 00:08:30.029 "superblock": false, 00:08:30.029 "num_base_bdevs": 3, 00:08:30.029 "num_base_bdevs_discovered": 2, 00:08:30.029 "num_base_bdevs_operational": 3, 00:08:30.029 "base_bdevs_list": [ 00:08:30.029 { 00:08:30.029 "name": null, 00:08:30.029 "uuid": "4a964656-ef89-4adf-a778-31197a89b605", 00:08:30.029 "is_configured": false, 00:08:30.029 "data_offset": 0, 00:08:30.029 "data_size": 65536 00:08:30.029 }, 00:08:30.029 { 00:08:30.029 "name": "BaseBdev2", 00:08:30.029 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:30.029 "is_configured": true, 00:08:30.029 "data_offset": 0, 00:08:30.029 "data_size": 65536 00:08:30.029 }, 00:08:30.029 { 00:08:30.029 "name": "BaseBdev3", 00:08:30.029 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:30.029 "is_configured": true, 00:08:30.029 "data_offset": 0, 00:08:30.029 "data_size": 65536 00:08:30.029 } 00:08:30.029 ] 00:08:30.029 }' 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.029 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.600 12:28:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4a964656-ef89-4adf-a778-31197a89b605 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.600 [2024-11-19 12:28:35.714685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:30.600 [2024-11-19 12:28:35.714755] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:30.600 [2024-11-19 12:28:35.714779] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:30.600 [2024-11-19 12:28:35.715024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:08:30.600 [2024-11-19 12:28:35.715142] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:30.600 [2024-11-19 12:28:35.715152] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:30.600 [2024-11-19 12:28:35.715333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.600 NewBaseBdev 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:30.600 [ 00:08:30.600 { 00:08:30.600 "name": "NewBaseBdev", 00:08:30.600 "aliases": [ 00:08:30.600 "4a964656-ef89-4adf-a778-31197a89b605" 00:08:30.600 ], 00:08:30.600 "product_name": "Malloc disk", 00:08:30.600 "block_size": 512, 00:08:30.600 "num_blocks": 65536, 00:08:30.600 "uuid": "4a964656-ef89-4adf-a778-31197a89b605", 00:08:30.600 "assigned_rate_limits": { 00:08:30.600 "rw_ios_per_sec": 0, 00:08:30.600 "rw_mbytes_per_sec": 0, 00:08:30.600 "r_mbytes_per_sec": 0, 00:08:30.600 "w_mbytes_per_sec": 0 00:08:30.600 }, 00:08:30.600 "claimed": true, 00:08:30.600 "claim_type": "exclusive_write", 00:08:30.600 "zoned": false, 00:08:30.600 "supported_io_types": { 00:08:30.600 "read": true, 00:08:30.600 "write": true, 00:08:30.600 "unmap": true, 00:08:30.600 "flush": true, 00:08:30.600 "reset": true, 00:08:30.600 "nvme_admin": false, 00:08:30.600 "nvme_io": false, 00:08:30.600 "nvme_io_md": false, 00:08:30.600 "write_zeroes": true, 00:08:30.600 "zcopy": true, 00:08:30.600 "get_zone_info": false, 00:08:30.600 "zone_management": false, 00:08:30.600 "zone_append": false, 00:08:30.600 "compare": false, 00:08:30.600 "compare_and_write": false, 00:08:30.600 "abort": true, 00:08:30.600 "seek_hole": false, 00:08:30.600 "seek_data": false, 00:08:30.600 "copy": true, 00:08:30.600 "nvme_iov_md": false 00:08:30.600 }, 00:08:30.600 "memory_domains": [ 00:08:30.600 { 00:08:30.600 "dma_device_id": "system", 00:08:30.600 "dma_device_type": 1 00:08:30.600 }, 00:08:30.600 { 00:08:30.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.600 "dma_device_type": 2 00:08:30.600 } 00:08:30.600 ], 00:08:30.600 "driver_specific": {} 00:08:30.600 } 00:08:30.600 ] 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:30.600 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.601 "name": "Existed_Raid", 00:08:30.601 "uuid": "a7555100-2086-4813-a9e6-1b46fcad0b75", 00:08:30.601 "strip_size_kb": 64, 00:08:30.601 "state": "online", 00:08:30.601 "raid_level": "raid0", 00:08:30.601 "superblock": false, 00:08:30.601 "num_base_bdevs": 3, 00:08:30.601 
"num_base_bdevs_discovered": 3, 00:08:30.601 "num_base_bdevs_operational": 3, 00:08:30.601 "base_bdevs_list": [ 00:08:30.601 { 00:08:30.601 "name": "NewBaseBdev", 00:08:30.601 "uuid": "4a964656-ef89-4adf-a778-31197a89b605", 00:08:30.601 "is_configured": true, 00:08:30.601 "data_offset": 0, 00:08:30.601 "data_size": 65536 00:08:30.601 }, 00:08:30.601 { 00:08:30.601 "name": "BaseBdev2", 00:08:30.601 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:30.601 "is_configured": true, 00:08:30.601 "data_offset": 0, 00:08:30.601 "data_size": 65536 00:08:30.601 }, 00:08:30.601 { 00:08:30.601 "name": "BaseBdev3", 00:08:30.601 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:30.601 "is_configured": true, 00:08:30.601 "data_offset": 0, 00:08:30.601 "data_size": 65536 00:08:30.601 } 00:08:30.601 ] 00:08:30.601 }' 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.601 12:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.171 [2024-11-19 12:28:36.246136] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.171 "name": "Existed_Raid", 00:08:31.171 "aliases": [ 00:08:31.171 "a7555100-2086-4813-a9e6-1b46fcad0b75" 00:08:31.171 ], 00:08:31.171 "product_name": "Raid Volume", 00:08:31.171 "block_size": 512, 00:08:31.171 "num_blocks": 196608, 00:08:31.171 "uuid": "a7555100-2086-4813-a9e6-1b46fcad0b75", 00:08:31.171 "assigned_rate_limits": { 00:08:31.171 "rw_ios_per_sec": 0, 00:08:31.171 "rw_mbytes_per_sec": 0, 00:08:31.171 "r_mbytes_per_sec": 0, 00:08:31.171 "w_mbytes_per_sec": 0 00:08:31.171 }, 00:08:31.171 "claimed": false, 00:08:31.171 "zoned": false, 00:08:31.171 "supported_io_types": { 00:08:31.171 "read": true, 00:08:31.171 "write": true, 00:08:31.171 "unmap": true, 00:08:31.171 "flush": true, 00:08:31.171 "reset": true, 00:08:31.171 "nvme_admin": false, 00:08:31.171 "nvme_io": false, 00:08:31.171 "nvme_io_md": false, 00:08:31.171 "write_zeroes": true, 00:08:31.171 "zcopy": false, 00:08:31.171 "get_zone_info": false, 00:08:31.171 "zone_management": false, 00:08:31.171 "zone_append": false, 00:08:31.171 "compare": false, 00:08:31.171 "compare_and_write": false, 00:08:31.171 "abort": false, 00:08:31.171 "seek_hole": false, 00:08:31.171 "seek_data": false, 00:08:31.171 "copy": false, 00:08:31.171 "nvme_iov_md": false 00:08:31.171 }, 00:08:31.171 "memory_domains": [ 00:08:31.171 { 00:08:31.171 "dma_device_id": "system", 00:08:31.171 "dma_device_type": 1 00:08:31.171 }, 00:08:31.171 { 00:08:31.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.171 "dma_device_type": 2 00:08:31.171 }, 
00:08:31.171 { 00:08:31.171 "dma_device_id": "system", 00:08:31.171 "dma_device_type": 1 00:08:31.171 }, 00:08:31.171 { 00:08:31.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.171 "dma_device_type": 2 00:08:31.171 }, 00:08:31.171 { 00:08:31.171 "dma_device_id": "system", 00:08:31.171 "dma_device_type": 1 00:08:31.171 }, 00:08:31.171 { 00:08:31.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.171 "dma_device_type": 2 00:08:31.171 } 00:08:31.171 ], 00:08:31.171 "driver_specific": { 00:08:31.171 "raid": { 00:08:31.171 "uuid": "a7555100-2086-4813-a9e6-1b46fcad0b75", 00:08:31.171 "strip_size_kb": 64, 00:08:31.171 "state": "online", 00:08:31.171 "raid_level": "raid0", 00:08:31.171 "superblock": false, 00:08:31.171 "num_base_bdevs": 3, 00:08:31.171 "num_base_bdevs_discovered": 3, 00:08:31.171 "num_base_bdevs_operational": 3, 00:08:31.171 "base_bdevs_list": [ 00:08:31.171 { 00:08:31.171 "name": "NewBaseBdev", 00:08:31.171 "uuid": "4a964656-ef89-4adf-a778-31197a89b605", 00:08:31.171 "is_configured": true, 00:08:31.171 "data_offset": 0, 00:08:31.171 "data_size": 65536 00:08:31.171 }, 00:08:31.171 { 00:08:31.171 "name": "BaseBdev2", 00:08:31.171 "uuid": "f637180c-9c09-4a1a-96d2-4f732640ff68", 00:08:31.171 "is_configured": true, 00:08:31.171 "data_offset": 0, 00:08:31.171 "data_size": 65536 00:08:31.171 }, 00:08:31.171 { 00:08:31.171 "name": "BaseBdev3", 00:08:31.171 "uuid": "574eaeb8-9e60-4c25-a6c3-628f61385147", 00:08:31.171 "is_configured": true, 00:08:31.171 "data_offset": 0, 00:08:31.171 "data_size": 65536 00:08:31.171 } 00:08:31.171 ] 00:08:31.171 } 00:08:31.171 } 00:08:31.171 }' 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.171 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:31.171 BaseBdev2 00:08:31.171 BaseBdev3' 00:08:31.172 12:28:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.172 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.432 [2024-11-19 12:28:36.513344] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.432 [2024-11-19 12:28:36.513375] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.432 [2024-11-19 12:28:36.513450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.432 [2024-11-19 12:28:36.513504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.432 [2024-11-19 12:28:36.513516] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75204 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75204 ']' 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75204 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:31.432 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.433 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75204 00:08:31.433 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.433 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.433 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75204' 00:08:31.433 killing process with pid 75204 00:08:31.433 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75204 00:08:31.433 [2024-11-19 12:28:36.564023] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.433 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75204 00:08:31.433 [2024-11-19 12:28:36.595063] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.693 00:08:31.693 real 0m8.771s 00:08:31.693 user 0m14.857s 00:08:31.693 sys 0m1.868s 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.693 ************************************ 00:08:31.693 END TEST raid_state_function_test 00:08:31.693 ************************************ 00:08:31.693 12:28:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:31.693 12:28:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:31.693 12:28:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.693 12:28:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.693 ************************************ 00:08:31.693 START TEST raid_state_function_test_sb 00:08:31.693 ************************************ 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75809 00:08:31.693 Process raid pid: 75809 
00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75809' 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75809 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75809 ']' 00:08:31.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.693 12:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.954 [2024-11-19 12:28:37.006579] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:31.954 [2024-11-19 12:28:37.006712] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.954 [2024-11-19 12:28:37.167163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.954 [2024-11-19 12:28:37.212895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.214 [2024-11-19 12:28:37.255992] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.214 [2024-11-19 12:28:37.256132] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.809 12:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.809 12:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:32.809 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.809 12:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.809 12:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.809 [2024-11-19 12:28:37.885311] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.810 [2024-11-19 12:28:37.885379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.810 [2024-11-19 12:28:37.885409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.810 [2024-11-19 12:28:37.885421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.810 [2024-11-19 12:28:37.885427] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:32.810 [2024-11-19 12:28:37.885438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.810 "name": "Existed_Raid", 00:08:32.810 "uuid": "b2292121-3eda-410f-8534-af3b8b433c5b", 00:08:32.810 "strip_size_kb": 64, 00:08:32.810 "state": "configuring", 00:08:32.810 "raid_level": "raid0", 00:08:32.810 "superblock": true, 00:08:32.810 "num_base_bdevs": 3, 00:08:32.810 "num_base_bdevs_discovered": 0, 00:08:32.810 "num_base_bdevs_operational": 3, 00:08:32.810 "base_bdevs_list": [ 00:08:32.810 { 00:08:32.810 "name": "BaseBdev1", 00:08:32.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.810 "is_configured": false, 00:08:32.810 "data_offset": 0, 00:08:32.810 "data_size": 0 00:08:32.810 }, 00:08:32.810 { 00:08:32.810 "name": "BaseBdev2", 00:08:32.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.810 "is_configured": false, 00:08:32.810 "data_offset": 0, 00:08:32.810 "data_size": 0 00:08:32.810 }, 00:08:32.810 { 00:08:32.810 "name": "BaseBdev3", 00:08:32.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.810 "is_configured": false, 00:08:32.810 "data_offset": 0, 00:08:32.810 "data_size": 0 00:08:32.810 } 00:08:32.810 ] 00:08:32.810 }' 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.810 12:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.069 [2024-11-19 12:28:38.300520] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.069 [2024-11-19 12:28:38.300569] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.069 [2024-11-19 12:28:38.308540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.069 [2024-11-19 12:28:38.308634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.069 [2024-11-19 12:28:38.308682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.069 [2024-11-19 12:28:38.308707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.069 [2024-11-19 12:28:38.308736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.069 [2024-11-19 12:28:38.308789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.069 [2024-11-19 12:28:38.325462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.069 BaseBdev1 
00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.069 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:33.328 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.328 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.328 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.329 [ 00:08:33.329 { 00:08:33.329 "name": "BaseBdev1", 00:08:33.329 "aliases": [ 00:08:33.329 "5ce9c406-e56a-4b85-8483-67f57c113212" 00:08:33.329 ], 00:08:33.329 "product_name": "Malloc disk", 00:08:33.329 "block_size": 512, 00:08:33.329 "num_blocks": 65536, 00:08:33.329 "uuid": "5ce9c406-e56a-4b85-8483-67f57c113212", 00:08:33.329 "assigned_rate_limits": { 00:08:33.329 
"rw_ios_per_sec": 0, 00:08:33.329 "rw_mbytes_per_sec": 0, 00:08:33.329 "r_mbytes_per_sec": 0, 00:08:33.329 "w_mbytes_per_sec": 0 00:08:33.329 }, 00:08:33.329 "claimed": true, 00:08:33.329 "claim_type": "exclusive_write", 00:08:33.329 "zoned": false, 00:08:33.329 "supported_io_types": { 00:08:33.329 "read": true, 00:08:33.329 "write": true, 00:08:33.329 "unmap": true, 00:08:33.329 "flush": true, 00:08:33.329 "reset": true, 00:08:33.329 "nvme_admin": false, 00:08:33.329 "nvme_io": false, 00:08:33.329 "nvme_io_md": false, 00:08:33.329 "write_zeroes": true, 00:08:33.329 "zcopy": true, 00:08:33.329 "get_zone_info": false, 00:08:33.329 "zone_management": false, 00:08:33.329 "zone_append": false, 00:08:33.329 "compare": false, 00:08:33.329 "compare_and_write": false, 00:08:33.329 "abort": true, 00:08:33.329 "seek_hole": false, 00:08:33.329 "seek_data": false, 00:08:33.329 "copy": true, 00:08:33.329 "nvme_iov_md": false 00:08:33.329 }, 00:08:33.329 "memory_domains": [ 00:08:33.329 { 00:08:33.329 "dma_device_id": "system", 00:08:33.329 "dma_device_type": 1 00:08:33.329 }, 00:08:33.329 { 00:08:33.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.329 "dma_device_type": 2 00:08:33.329 } 00:08:33.329 ], 00:08:33.329 "driver_specific": {} 00:08:33.329 } 00:08:33.329 ] 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.329 "name": "Existed_Raid", 00:08:33.329 "uuid": "989d39b7-11f7-47c8-9217-21c8a62a10b0", 00:08:33.329 "strip_size_kb": 64, 00:08:33.329 "state": "configuring", 00:08:33.329 "raid_level": "raid0", 00:08:33.329 "superblock": true, 00:08:33.329 "num_base_bdevs": 3, 00:08:33.329 "num_base_bdevs_discovered": 1, 00:08:33.329 "num_base_bdevs_operational": 3, 00:08:33.329 "base_bdevs_list": [ 00:08:33.329 { 00:08:33.329 "name": "BaseBdev1", 00:08:33.329 "uuid": "5ce9c406-e56a-4b85-8483-67f57c113212", 00:08:33.329 "is_configured": true, 00:08:33.329 "data_offset": 2048, 00:08:33.329 "data_size": 63488 
00:08:33.329 }, 00:08:33.329 { 00:08:33.329 "name": "BaseBdev2", 00:08:33.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.329 "is_configured": false, 00:08:33.329 "data_offset": 0, 00:08:33.329 "data_size": 0 00:08:33.329 }, 00:08:33.329 { 00:08:33.329 "name": "BaseBdev3", 00:08:33.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.329 "is_configured": false, 00:08:33.329 "data_offset": 0, 00:08:33.329 "data_size": 0 00:08:33.329 } 00:08:33.329 ] 00:08:33.329 }' 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.329 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.589 [2024-11-19 12:28:38.804676] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.589 [2024-11-19 12:28:38.804846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.589 [2024-11-19 12:28:38.812689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.589 [2024-11-19 
12:28:38.814532] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.589 [2024-11-19 12:28:38.814577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.589 [2024-11-19 12:28:38.814587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.589 [2024-11-19 12:28:38.814597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.589 12:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.850 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.850 "name": "Existed_Raid", 00:08:33.850 "uuid": "200bee8e-2b6c-4619-b930-e584b0f1f450", 00:08:33.850 "strip_size_kb": 64, 00:08:33.850 "state": "configuring", 00:08:33.850 "raid_level": "raid0", 00:08:33.850 "superblock": true, 00:08:33.851 "num_base_bdevs": 3, 00:08:33.851 "num_base_bdevs_discovered": 1, 00:08:33.851 "num_base_bdevs_operational": 3, 00:08:33.851 "base_bdevs_list": [ 00:08:33.851 { 00:08:33.851 "name": "BaseBdev1", 00:08:33.851 "uuid": "5ce9c406-e56a-4b85-8483-67f57c113212", 00:08:33.851 "is_configured": true, 00:08:33.851 "data_offset": 2048, 00:08:33.851 "data_size": 63488 00:08:33.851 }, 00:08:33.851 { 00:08:33.851 "name": "BaseBdev2", 00:08:33.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.851 "is_configured": false, 00:08:33.851 "data_offset": 0, 00:08:33.851 "data_size": 0 00:08:33.851 }, 00:08:33.851 { 00:08:33.851 "name": "BaseBdev3", 00:08:33.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.851 "is_configured": false, 00:08:33.851 "data_offset": 0, 00:08:33.851 "data_size": 0 00:08:33.851 } 00:08:33.851 ] 00:08:33.851 }' 00:08:33.851 12:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.851 12:28:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.117 [2024-11-19 12:28:39.294062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.117 BaseBdev2 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.117 [ 00:08:34.117 { 00:08:34.117 "name": "BaseBdev2", 00:08:34.117 "aliases": [ 00:08:34.117 "ded2b27d-eb63-493c-81e6-45b721bc8e02" 00:08:34.117 ], 00:08:34.117 "product_name": "Malloc disk", 00:08:34.117 "block_size": 512, 00:08:34.117 "num_blocks": 65536, 00:08:34.117 "uuid": "ded2b27d-eb63-493c-81e6-45b721bc8e02", 00:08:34.117 "assigned_rate_limits": { 00:08:34.117 "rw_ios_per_sec": 0, 00:08:34.117 "rw_mbytes_per_sec": 0, 00:08:34.117 "r_mbytes_per_sec": 0, 00:08:34.117 "w_mbytes_per_sec": 0 00:08:34.117 }, 00:08:34.117 "claimed": true, 00:08:34.117 "claim_type": "exclusive_write", 00:08:34.117 "zoned": false, 00:08:34.117 "supported_io_types": { 00:08:34.117 "read": true, 00:08:34.117 "write": true, 00:08:34.117 "unmap": true, 00:08:34.117 "flush": true, 00:08:34.117 "reset": true, 00:08:34.117 "nvme_admin": false, 00:08:34.117 "nvme_io": false, 00:08:34.117 "nvme_io_md": false, 00:08:34.117 "write_zeroes": true, 00:08:34.117 "zcopy": true, 00:08:34.117 "get_zone_info": false, 00:08:34.117 "zone_management": false, 00:08:34.117 "zone_append": false, 00:08:34.117 "compare": false, 00:08:34.117 "compare_and_write": false, 00:08:34.117 "abort": true, 00:08:34.117 "seek_hole": false, 00:08:34.117 "seek_data": false, 00:08:34.117 "copy": true, 00:08:34.117 "nvme_iov_md": false 00:08:34.117 }, 00:08:34.117 "memory_domains": [ 00:08:34.117 { 00:08:34.117 "dma_device_id": "system", 00:08:34.117 "dma_device_type": 1 00:08:34.117 }, 00:08:34.117 { 00:08:34.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.117 "dma_device_type": 2 00:08:34.117 } 00:08:34.117 ], 00:08:34.117 "driver_specific": {} 00:08:34.117 } 00:08:34.117 ] 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.117 12:28:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.376 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.376 "name": "Existed_Raid", 00:08:34.376 "uuid": "200bee8e-2b6c-4619-b930-e584b0f1f450", 00:08:34.376 "strip_size_kb": 64, 00:08:34.376 "state": "configuring", 00:08:34.376 "raid_level": "raid0", 00:08:34.376 "superblock": true, 00:08:34.376 "num_base_bdevs": 3, 00:08:34.376 "num_base_bdevs_discovered": 2, 00:08:34.376 "num_base_bdevs_operational": 3, 00:08:34.376 "base_bdevs_list": [ 00:08:34.376 { 00:08:34.376 "name": "BaseBdev1", 00:08:34.376 "uuid": "5ce9c406-e56a-4b85-8483-67f57c113212", 00:08:34.376 "is_configured": true, 00:08:34.376 "data_offset": 2048, 00:08:34.376 "data_size": 63488 00:08:34.376 }, 00:08:34.376 { 00:08:34.376 "name": "BaseBdev2", 00:08:34.376 "uuid": "ded2b27d-eb63-493c-81e6-45b721bc8e02", 00:08:34.376 "is_configured": true, 00:08:34.376 "data_offset": 2048, 00:08:34.376 "data_size": 63488 00:08:34.376 }, 00:08:34.376 { 00:08:34.376 "name": "BaseBdev3", 00:08:34.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.376 "is_configured": false, 00:08:34.376 "data_offset": 0, 00:08:34.376 "data_size": 0 00:08:34.376 } 00:08:34.376 ] 00:08:34.376 }' 00:08:34.376 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.376 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.637 [2024-11-19 12:28:39.804457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:34.637 BaseBdev3 00:08:34.637 [2024-11-19 
12:28:39.804775] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:34.637 [2024-11-19 12:28:39.804800] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:34.637 [2024-11-19 12:28:39.805105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:34.637 [2024-11-19 12:28:39.805231] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:34.637 [2024-11-19 12:28:39.805241] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:34.637 [2024-11-19 12:28:39.805356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.637 [ 00:08:34.637 { 00:08:34.637 "name": "BaseBdev3", 00:08:34.637 "aliases": [ 00:08:34.637 "42f01254-e8bf-419d-8d1e-56807c54858d" 00:08:34.637 ], 00:08:34.637 "product_name": "Malloc disk", 00:08:34.637 "block_size": 512, 00:08:34.637 "num_blocks": 65536, 00:08:34.637 "uuid": "42f01254-e8bf-419d-8d1e-56807c54858d", 00:08:34.637 "assigned_rate_limits": { 00:08:34.637 "rw_ios_per_sec": 0, 00:08:34.637 "rw_mbytes_per_sec": 0, 00:08:34.637 "r_mbytes_per_sec": 0, 00:08:34.637 "w_mbytes_per_sec": 0 00:08:34.637 }, 00:08:34.637 "claimed": true, 00:08:34.637 "claim_type": "exclusive_write", 00:08:34.637 "zoned": false, 00:08:34.637 "supported_io_types": { 00:08:34.637 "read": true, 00:08:34.637 "write": true, 00:08:34.637 "unmap": true, 00:08:34.637 "flush": true, 00:08:34.637 "reset": true, 00:08:34.637 "nvme_admin": false, 00:08:34.637 "nvme_io": false, 00:08:34.637 "nvme_io_md": false, 00:08:34.637 "write_zeroes": true, 00:08:34.637 "zcopy": true, 00:08:34.637 "get_zone_info": false, 00:08:34.637 "zone_management": false, 00:08:34.637 "zone_append": false, 00:08:34.637 "compare": false, 00:08:34.637 "compare_and_write": false, 00:08:34.637 "abort": true, 00:08:34.637 "seek_hole": false, 00:08:34.637 "seek_data": false, 00:08:34.637 "copy": true, 00:08:34.637 "nvme_iov_md": false 00:08:34.637 }, 00:08:34.637 "memory_domains": [ 00:08:34.637 { 00:08:34.637 "dma_device_id": "system", 00:08:34.637 "dma_device_type": 1 00:08:34.637 }, 00:08:34.637 { 00:08:34.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.637 "dma_device_type": 2 00:08:34.637 } 00:08:34.637 ], 00:08:34.637 "driver_specific": {} 
00:08:34.637 } 00:08:34.637 ] 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.637 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.638 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.638 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.638 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.638 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.638 
12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.638 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.898 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.898 "name": "Existed_Raid", 00:08:34.898 "uuid": "200bee8e-2b6c-4619-b930-e584b0f1f450", 00:08:34.898 "strip_size_kb": 64, 00:08:34.898 "state": "online", 00:08:34.898 "raid_level": "raid0", 00:08:34.898 "superblock": true, 00:08:34.898 "num_base_bdevs": 3, 00:08:34.898 "num_base_bdevs_discovered": 3, 00:08:34.898 "num_base_bdevs_operational": 3, 00:08:34.898 "base_bdevs_list": [ 00:08:34.898 { 00:08:34.898 "name": "BaseBdev1", 00:08:34.898 "uuid": "5ce9c406-e56a-4b85-8483-67f57c113212", 00:08:34.898 "is_configured": true, 00:08:34.898 "data_offset": 2048, 00:08:34.898 "data_size": 63488 00:08:34.898 }, 00:08:34.898 { 00:08:34.898 "name": "BaseBdev2", 00:08:34.898 "uuid": "ded2b27d-eb63-493c-81e6-45b721bc8e02", 00:08:34.898 "is_configured": true, 00:08:34.898 "data_offset": 2048, 00:08:34.898 "data_size": 63488 00:08:34.898 }, 00:08:34.898 { 00:08:34.898 "name": "BaseBdev3", 00:08:34.898 "uuid": "42f01254-e8bf-419d-8d1e-56807c54858d", 00:08:34.898 "is_configured": true, 00:08:34.898 "data_offset": 2048, 00:08:34.898 "data_size": 63488 00:08:34.898 } 00:08:34.898 ] 00:08:34.898 }' 00:08:34.898 12:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.898 12:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.159 [2024-11-19 12:28:40.280016] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.159 "name": "Existed_Raid", 00:08:35.159 "aliases": [ 00:08:35.159 "200bee8e-2b6c-4619-b930-e584b0f1f450" 00:08:35.159 ], 00:08:35.159 "product_name": "Raid Volume", 00:08:35.159 "block_size": 512, 00:08:35.159 "num_blocks": 190464, 00:08:35.159 "uuid": "200bee8e-2b6c-4619-b930-e584b0f1f450", 00:08:35.159 "assigned_rate_limits": { 00:08:35.159 "rw_ios_per_sec": 0, 00:08:35.159 "rw_mbytes_per_sec": 0, 00:08:35.159 "r_mbytes_per_sec": 0, 00:08:35.159 "w_mbytes_per_sec": 0 00:08:35.159 }, 00:08:35.159 "claimed": false, 00:08:35.159 "zoned": false, 00:08:35.159 "supported_io_types": { 00:08:35.159 "read": true, 00:08:35.159 "write": true, 00:08:35.159 "unmap": true, 00:08:35.159 "flush": true, 00:08:35.159 "reset": true, 00:08:35.159 "nvme_admin": false, 00:08:35.159 "nvme_io": false, 00:08:35.159 "nvme_io_md": false, 00:08:35.159 
"write_zeroes": true, 00:08:35.159 "zcopy": false, 00:08:35.159 "get_zone_info": false, 00:08:35.159 "zone_management": false, 00:08:35.159 "zone_append": false, 00:08:35.159 "compare": false, 00:08:35.159 "compare_and_write": false, 00:08:35.159 "abort": false, 00:08:35.159 "seek_hole": false, 00:08:35.159 "seek_data": false, 00:08:35.159 "copy": false, 00:08:35.159 "nvme_iov_md": false 00:08:35.159 }, 00:08:35.159 "memory_domains": [ 00:08:35.159 { 00:08:35.159 "dma_device_id": "system", 00:08:35.159 "dma_device_type": 1 00:08:35.159 }, 00:08:35.159 { 00:08:35.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.159 "dma_device_type": 2 00:08:35.159 }, 00:08:35.159 { 00:08:35.159 "dma_device_id": "system", 00:08:35.159 "dma_device_type": 1 00:08:35.159 }, 00:08:35.159 { 00:08:35.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.159 "dma_device_type": 2 00:08:35.159 }, 00:08:35.159 { 00:08:35.159 "dma_device_id": "system", 00:08:35.159 "dma_device_type": 1 00:08:35.159 }, 00:08:35.159 { 00:08:35.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.159 "dma_device_type": 2 00:08:35.159 } 00:08:35.159 ], 00:08:35.159 "driver_specific": { 00:08:35.159 "raid": { 00:08:35.159 "uuid": "200bee8e-2b6c-4619-b930-e584b0f1f450", 00:08:35.159 "strip_size_kb": 64, 00:08:35.159 "state": "online", 00:08:35.159 "raid_level": "raid0", 00:08:35.159 "superblock": true, 00:08:35.159 "num_base_bdevs": 3, 00:08:35.159 "num_base_bdevs_discovered": 3, 00:08:35.159 "num_base_bdevs_operational": 3, 00:08:35.159 "base_bdevs_list": [ 00:08:35.159 { 00:08:35.159 "name": "BaseBdev1", 00:08:35.159 "uuid": "5ce9c406-e56a-4b85-8483-67f57c113212", 00:08:35.159 "is_configured": true, 00:08:35.159 "data_offset": 2048, 00:08:35.159 "data_size": 63488 00:08:35.159 }, 00:08:35.159 { 00:08:35.159 "name": "BaseBdev2", 00:08:35.159 "uuid": "ded2b27d-eb63-493c-81e6-45b721bc8e02", 00:08:35.159 "is_configured": true, 00:08:35.159 "data_offset": 2048, 00:08:35.159 "data_size": 63488 00:08:35.159 }, 
00:08:35.159 { 00:08:35.159 "name": "BaseBdev3", 00:08:35.159 "uuid": "42f01254-e8bf-419d-8d1e-56807c54858d", 00:08:35.159 "is_configured": true, 00:08:35.159 "data_offset": 2048, 00:08:35.159 "data_size": 63488 00:08:35.159 } 00:08:35.159 ] 00:08:35.159 } 00:08:35.159 } 00:08:35.159 }' 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.159 BaseBdev2 00:08:35.159 BaseBdev3' 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.159 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.160 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.160 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.160 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.160 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.160 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.160 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.160 
12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.160 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.160 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.160 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.420 [2024-11-19 12:28:40.511333] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.420 [2024-11-19 12:28:40.511372] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.420 [2024-11-19 12:28:40.511431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:35.420 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.421 "name": "Existed_Raid", 00:08:35.421 "uuid": "200bee8e-2b6c-4619-b930-e584b0f1f450", 00:08:35.421 "strip_size_kb": 64, 00:08:35.421 "state": "offline", 00:08:35.421 "raid_level": "raid0", 00:08:35.421 "superblock": true, 00:08:35.421 "num_base_bdevs": 3, 00:08:35.421 "num_base_bdevs_discovered": 2, 00:08:35.421 "num_base_bdevs_operational": 2, 00:08:35.421 "base_bdevs_list": [ 00:08:35.421 { 00:08:35.421 "name": null, 00:08:35.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.421 "is_configured": false, 00:08:35.421 "data_offset": 0, 00:08:35.421 "data_size": 63488 00:08:35.421 }, 00:08:35.421 { 00:08:35.421 "name": "BaseBdev2", 00:08:35.421 "uuid": "ded2b27d-eb63-493c-81e6-45b721bc8e02", 00:08:35.421 "is_configured": true, 00:08:35.421 "data_offset": 2048, 00:08:35.421 "data_size": 63488 00:08:35.421 }, 00:08:35.421 { 00:08:35.421 "name": "BaseBdev3", 00:08:35.421 "uuid": "42f01254-e8bf-419d-8d1e-56807c54858d", 
00:08:35.421 "is_configured": true, 00:08:35.421 "data_offset": 2048, 00:08:35.421 "data_size": 63488 00:08:35.421 } 00:08:35.421 ] 00:08:35.421 }' 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.421 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.681 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.681 [2024-11-19 12:28:40.933879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.941 12:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.941 [2024-11-19 12:28:41.005077] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:35.941 [2024-11-19 12:28:41.005133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:35.941 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.941 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.941 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.942 BaseBdev2 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:35.942 12:28:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.942 [ 00:08:35.942 { 00:08:35.942 "name": "BaseBdev2", 00:08:35.942 "aliases": [ 00:08:35.942 "68d481d8-7399-409d-98aa-9ffbe57c6b35" 00:08:35.942 ], 00:08:35.942 "product_name": "Malloc disk", 00:08:35.942 "block_size": 512, 00:08:35.942 "num_blocks": 65536, 00:08:35.942 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:35.942 "assigned_rate_limits": { 00:08:35.942 "rw_ios_per_sec": 0, 00:08:35.942 "rw_mbytes_per_sec": 0, 00:08:35.942 "r_mbytes_per_sec": 0, 00:08:35.942 "w_mbytes_per_sec": 0 00:08:35.942 }, 00:08:35.942 "claimed": false, 00:08:35.942 "zoned": false, 00:08:35.942 "supported_io_types": { 00:08:35.942 "read": true, 00:08:35.942 "write": true, 00:08:35.942 "unmap": true, 00:08:35.942 "flush": true, 00:08:35.942 "reset": true, 00:08:35.942 "nvme_admin": false, 00:08:35.942 "nvme_io": false, 00:08:35.942 "nvme_io_md": false, 00:08:35.942 "write_zeroes": true, 00:08:35.942 "zcopy": true, 00:08:35.942 "get_zone_info": false, 00:08:35.942 
"zone_management": false, 00:08:35.942 "zone_append": false, 00:08:35.942 "compare": false, 00:08:35.942 "compare_and_write": false, 00:08:35.942 "abort": true, 00:08:35.942 "seek_hole": false, 00:08:35.942 "seek_data": false, 00:08:35.942 "copy": true, 00:08:35.942 "nvme_iov_md": false 00:08:35.942 }, 00:08:35.942 "memory_domains": [ 00:08:35.942 { 00:08:35.942 "dma_device_id": "system", 00:08:35.942 "dma_device_type": 1 00:08:35.942 }, 00:08:35.942 { 00:08:35.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.942 "dma_device_type": 2 00:08:35.942 } 00:08:35.942 ], 00:08:35.942 "driver_specific": {} 00:08:35.942 } 00:08:35.942 ] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.942 BaseBdev3 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.942 [ 00:08:35.942 { 00:08:35.942 "name": "BaseBdev3", 00:08:35.942 "aliases": [ 00:08:35.942 "ce0d913d-2317-4a2d-b438-b1edd26b224d" 00:08:35.942 ], 00:08:35.942 "product_name": "Malloc disk", 00:08:35.942 "block_size": 512, 00:08:35.942 "num_blocks": 65536, 00:08:35.942 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:35.942 "assigned_rate_limits": { 00:08:35.942 "rw_ios_per_sec": 0, 00:08:35.942 "rw_mbytes_per_sec": 0, 00:08:35.942 "r_mbytes_per_sec": 0, 00:08:35.942 "w_mbytes_per_sec": 0 00:08:35.942 }, 00:08:35.942 "claimed": false, 00:08:35.942 "zoned": false, 00:08:35.942 "supported_io_types": { 00:08:35.942 "read": true, 00:08:35.942 "write": true, 00:08:35.942 "unmap": true, 00:08:35.942 "flush": true, 00:08:35.942 "reset": true, 00:08:35.942 "nvme_admin": false, 00:08:35.942 "nvme_io": false, 00:08:35.942 "nvme_io_md": false, 00:08:35.942 "write_zeroes": true, 00:08:35.942 
"zcopy": true, 00:08:35.942 "get_zone_info": false, 00:08:35.942 "zone_management": false, 00:08:35.942 "zone_append": false, 00:08:35.942 "compare": false, 00:08:35.942 "compare_and_write": false, 00:08:35.942 "abort": true, 00:08:35.942 "seek_hole": false, 00:08:35.942 "seek_data": false, 00:08:35.942 "copy": true, 00:08:35.942 "nvme_iov_md": false 00:08:35.942 }, 00:08:35.942 "memory_domains": [ 00:08:35.942 { 00:08:35.942 "dma_device_id": "system", 00:08:35.942 "dma_device_type": 1 00:08:35.942 }, 00:08:35.942 { 00:08:35.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.942 "dma_device_type": 2 00:08:35.942 } 00:08:35.942 ], 00:08:35.942 "driver_specific": {} 00:08:35.942 } 00:08:35.942 ] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.942 [2024-11-19 12:28:41.184567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.942 [2024-11-19 12:28:41.184709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.942 [2024-11-19 12:28:41.184758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.942 [2024-11-19 12:28:41.186550] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.942 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.943 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.943 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.943 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.943 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.943 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.943 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.943 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.943 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.202 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.202 12:28:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.202 "name": "Existed_Raid", 00:08:36.202 "uuid": "bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:36.202 "strip_size_kb": 64, 00:08:36.202 "state": "configuring", 00:08:36.202 "raid_level": "raid0", 00:08:36.202 "superblock": true, 00:08:36.202 "num_base_bdevs": 3, 00:08:36.202 "num_base_bdevs_discovered": 2, 00:08:36.202 "num_base_bdevs_operational": 3, 00:08:36.202 "base_bdevs_list": [ 00:08:36.202 { 00:08:36.202 "name": "BaseBdev1", 00:08:36.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.202 "is_configured": false, 00:08:36.202 "data_offset": 0, 00:08:36.202 "data_size": 0 00:08:36.202 }, 00:08:36.202 { 00:08:36.202 "name": "BaseBdev2", 00:08:36.202 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:36.202 "is_configured": true, 00:08:36.202 "data_offset": 2048, 00:08:36.202 "data_size": 63488 00:08:36.202 }, 00:08:36.202 { 00:08:36.202 "name": "BaseBdev3", 00:08:36.202 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:36.202 "is_configured": true, 00:08:36.202 "data_offset": 2048, 00:08:36.202 "data_size": 63488 00:08:36.202 } 00:08:36.202 ] 00:08:36.202 }' 00:08:36.202 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.202 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.462 [2024-11-19 12:28:41.627890] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.462 12:28:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.462 "name": "Existed_Raid", 00:08:36.462 "uuid": "bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:36.462 "strip_size_kb": 64, 
00:08:36.462 "state": "configuring", 00:08:36.462 "raid_level": "raid0", 00:08:36.462 "superblock": true, 00:08:36.462 "num_base_bdevs": 3, 00:08:36.462 "num_base_bdevs_discovered": 1, 00:08:36.462 "num_base_bdevs_operational": 3, 00:08:36.462 "base_bdevs_list": [ 00:08:36.462 { 00:08:36.462 "name": "BaseBdev1", 00:08:36.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.462 "is_configured": false, 00:08:36.462 "data_offset": 0, 00:08:36.462 "data_size": 0 00:08:36.462 }, 00:08:36.462 { 00:08:36.462 "name": null, 00:08:36.462 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:36.462 "is_configured": false, 00:08:36.462 "data_offset": 0, 00:08:36.462 "data_size": 63488 00:08:36.462 }, 00:08:36.462 { 00:08:36.462 "name": "BaseBdev3", 00:08:36.462 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:36.462 "is_configured": true, 00:08:36.462 "data_offset": 2048, 00:08:36.462 "data_size": 63488 00:08:36.462 } 00:08:36.462 ] 00:08:36.462 }' 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.462 12:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 [2024-11-19 12:28:42.089918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.032 BaseBdev1 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 
[ 00:08:37.032 { 00:08:37.032 "name": "BaseBdev1", 00:08:37.032 "aliases": [ 00:08:37.032 "b678ec44-6453-4447-971d-537f84ed6b39" 00:08:37.032 ], 00:08:37.032 "product_name": "Malloc disk", 00:08:37.032 "block_size": 512, 00:08:37.032 "num_blocks": 65536, 00:08:37.032 "uuid": "b678ec44-6453-4447-971d-537f84ed6b39", 00:08:37.032 "assigned_rate_limits": { 00:08:37.032 "rw_ios_per_sec": 0, 00:08:37.032 "rw_mbytes_per_sec": 0, 00:08:37.032 "r_mbytes_per_sec": 0, 00:08:37.032 "w_mbytes_per_sec": 0 00:08:37.032 }, 00:08:37.032 "claimed": true, 00:08:37.032 "claim_type": "exclusive_write", 00:08:37.032 "zoned": false, 00:08:37.032 "supported_io_types": { 00:08:37.032 "read": true, 00:08:37.032 "write": true, 00:08:37.032 "unmap": true, 00:08:37.032 "flush": true, 00:08:37.032 "reset": true, 00:08:37.032 "nvme_admin": false, 00:08:37.032 "nvme_io": false, 00:08:37.032 "nvme_io_md": false, 00:08:37.032 "write_zeroes": true, 00:08:37.032 "zcopy": true, 00:08:37.032 "get_zone_info": false, 00:08:37.032 "zone_management": false, 00:08:37.032 "zone_append": false, 00:08:37.032 "compare": false, 00:08:37.032 "compare_and_write": false, 00:08:37.032 "abort": true, 00:08:37.032 "seek_hole": false, 00:08:37.032 "seek_data": false, 00:08:37.032 "copy": true, 00:08:37.032 "nvme_iov_md": false 00:08:37.032 }, 00:08:37.032 "memory_domains": [ 00:08:37.032 { 00:08:37.032 "dma_device_id": "system", 00:08:37.032 "dma_device_type": 1 00:08:37.032 }, 00:08:37.032 { 00:08:37.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.032 "dma_device_type": 2 00:08:37.032 } 00:08:37.032 ], 00:08:37.032 "driver_specific": {} 00:08:37.032 } 00:08:37.032 ] 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.032 "name": "Existed_Raid", 00:08:37.032 "uuid": "bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:37.032 "strip_size_kb": 64, 00:08:37.032 "state": "configuring", 00:08:37.032 "raid_level": "raid0", 00:08:37.032 "superblock": true, 
00:08:37.032 "num_base_bdevs": 3, 00:08:37.032 "num_base_bdevs_discovered": 2, 00:08:37.032 "num_base_bdevs_operational": 3, 00:08:37.032 "base_bdevs_list": [ 00:08:37.032 { 00:08:37.032 "name": "BaseBdev1", 00:08:37.032 "uuid": "b678ec44-6453-4447-971d-537f84ed6b39", 00:08:37.032 "is_configured": true, 00:08:37.032 "data_offset": 2048, 00:08:37.032 "data_size": 63488 00:08:37.032 }, 00:08:37.032 { 00:08:37.032 "name": null, 00:08:37.032 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:37.032 "is_configured": false, 00:08:37.032 "data_offset": 0, 00:08:37.032 "data_size": 63488 00:08:37.032 }, 00:08:37.032 { 00:08:37.032 "name": "BaseBdev3", 00:08:37.032 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:37.032 "is_configured": true, 00:08:37.032 "data_offset": 2048, 00:08:37.032 "data_size": 63488 00:08:37.032 } 00:08:37.032 ] 00:08:37.032 }' 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.032 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.603 [2024-11-19 12:28:42.641022] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.603 "name": "Existed_Raid", 00:08:37.603 "uuid": "bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:37.603 "strip_size_kb": 64, 00:08:37.603 "state": "configuring", 00:08:37.603 "raid_level": "raid0", 00:08:37.603 "superblock": true, 00:08:37.603 "num_base_bdevs": 3, 00:08:37.603 "num_base_bdevs_discovered": 1, 00:08:37.603 "num_base_bdevs_operational": 3, 00:08:37.603 "base_bdevs_list": [ 00:08:37.603 { 00:08:37.603 "name": "BaseBdev1", 00:08:37.603 "uuid": "b678ec44-6453-4447-971d-537f84ed6b39", 00:08:37.603 "is_configured": true, 00:08:37.603 "data_offset": 2048, 00:08:37.603 "data_size": 63488 00:08:37.603 }, 00:08:37.603 { 00:08:37.603 "name": null, 00:08:37.603 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:37.603 "is_configured": false, 00:08:37.603 "data_offset": 0, 00:08:37.603 "data_size": 63488 00:08:37.603 }, 00:08:37.603 { 00:08:37.603 "name": null, 00:08:37.603 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:37.603 "is_configured": false, 00:08:37.603 "data_offset": 0, 00:08:37.603 "data_size": 63488 00:08:37.603 } 00:08:37.603 ] 00:08:37.603 }' 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.603 12:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.175 [2024-11-19 12:28:43.176155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.175 12:28:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.175 "name": "Existed_Raid", 00:08:38.175 "uuid": "bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:38.175 "strip_size_kb": 64, 00:08:38.175 "state": "configuring", 00:08:38.175 "raid_level": "raid0", 00:08:38.175 "superblock": true, 00:08:38.175 "num_base_bdevs": 3, 00:08:38.175 "num_base_bdevs_discovered": 2, 00:08:38.175 "num_base_bdevs_operational": 3, 00:08:38.175 "base_bdevs_list": [ 00:08:38.175 { 00:08:38.175 "name": "BaseBdev1", 00:08:38.175 "uuid": "b678ec44-6453-4447-971d-537f84ed6b39", 00:08:38.175 "is_configured": true, 00:08:38.175 "data_offset": 2048, 00:08:38.175 "data_size": 63488 00:08:38.175 }, 00:08:38.175 { 00:08:38.175 "name": null, 00:08:38.175 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:38.175 "is_configured": false, 00:08:38.175 "data_offset": 0, 00:08:38.175 "data_size": 63488 00:08:38.175 }, 00:08:38.175 { 00:08:38.175 "name": "BaseBdev3", 00:08:38.175 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:38.175 "is_configured": true, 00:08:38.175 "data_offset": 2048, 00:08:38.175 "data_size": 63488 00:08:38.175 } 00:08:38.175 ] 00:08:38.175 }' 00:08:38.175 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.175 
12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.437 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.437 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.437 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.437 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.437 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.696 [2024-11-19 12:28:43.703267] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.696 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.697 12:28:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.697 "name": "Existed_Raid", 00:08:38.697 "uuid": "bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:38.697 "strip_size_kb": 64, 00:08:38.697 "state": "configuring", 00:08:38.697 "raid_level": "raid0", 00:08:38.697 "superblock": true, 00:08:38.697 "num_base_bdevs": 3, 00:08:38.697 "num_base_bdevs_discovered": 1, 00:08:38.697 "num_base_bdevs_operational": 3, 00:08:38.697 "base_bdevs_list": [ 00:08:38.697 { 00:08:38.697 "name": null, 00:08:38.697 "uuid": "b678ec44-6453-4447-971d-537f84ed6b39", 00:08:38.697 "is_configured": false, 00:08:38.697 "data_offset": 0, 00:08:38.697 "data_size": 63488 00:08:38.697 }, 00:08:38.697 { 00:08:38.697 "name": null, 00:08:38.697 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:38.697 "is_configured": false, 
00:08:38.697 "data_offset": 0, 00:08:38.697 "data_size": 63488 00:08:38.697 }, 00:08:38.697 { 00:08:38.697 "name": "BaseBdev3", 00:08:38.697 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:38.697 "is_configured": true, 00:08:38.697 "data_offset": 2048, 00:08:38.697 "data_size": 63488 00:08:38.697 } 00:08:38.697 ] 00:08:38.697 }' 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.697 12:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.956 [2024-11-19 12:28:44.189019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.956 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.216 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.216 "name": "Existed_Raid", 00:08:39.216 "uuid": "bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:39.216 "strip_size_kb": 64, 00:08:39.216 "state": "configuring", 00:08:39.216 "raid_level": "raid0", 00:08:39.216 "superblock": true, 00:08:39.216 
"num_base_bdevs": 3, 00:08:39.216 "num_base_bdevs_discovered": 2, 00:08:39.216 "num_base_bdevs_operational": 3, 00:08:39.216 "base_bdevs_list": [ 00:08:39.216 { 00:08:39.216 "name": null, 00:08:39.216 "uuid": "b678ec44-6453-4447-971d-537f84ed6b39", 00:08:39.216 "is_configured": false, 00:08:39.216 "data_offset": 0, 00:08:39.216 "data_size": 63488 00:08:39.216 }, 00:08:39.216 { 00:08:39.216 "name": "BaseBdev2", 00:08:39.216 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:39.216 "is_configured": true, 00:08:39.216 "data_offset": 2048, 00:08:39.216 "data_size": 63488 00:08:39.216 }, 00:08:39.216 { 00:08:39.216 "name": "BaseBdev3", 00:08:39.216 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:39.216 "is_configured": true, 00:08:39.216 "data_offset": 2048, 00:08:39.216 "data_size": 63488 00:08:39.216 } 00:08:39.216 ] 00:08:39.216 }' 00:08:39.216 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.216 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b678ec44-6453-4447-971d-537f84ed6b39 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.476 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.736 [2024-11-19 12:28:44.739039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:39.736 [2024-11-19 12:28:44.739209] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:39.736 [2024-11-19 12:28:44.739226] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.736 [2024-11-19 12:28:44.739491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:39.736 [2024-11-19 12:28:44.739604] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:39.736 [2024-11-19 12:28:44.739614] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:39.736 NewBaseBdev 00:08:39.736 [2024-11-19 12:28:44.739716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.736 [ 00:08:39.736 { 00:08:39.736 "name": "NewBaseBdev", 00:08:39.736 "aliases": [ 00:08:39.736 "b678ec44-6453-4447-971d-537f84ed6b39" 00:08:39.736 ], 00:08:39.736 "product_name": "Malloc disk", 00:08:39.736 "block_size": 512, 00:08:39.736 "num_blocks": 65536, 00:08:39.736 "uuid": "b678ec44-6453-4447-971d-537f84ed6b39", 00:08:39.736 "assigned_rate_limits": { 00:08:39.736 "rw_ios_per_sec": 0, 00:08:39.736 "rw_mbytes_per_sec": 0, 00:08:39.736 "r_mbytes_per_sec": 0, 00:08:39.736 "w_mbytes_per_sec": 0 00:08:39.736 }, 00:08:39.736 "claimed": true, 00:08:39.736 "claim_type": "exclusive_write", 00:08:39.736 "zoned": false, 00:08:39.736 "supported_io_types": { 
00:08:39.736 "read": true, 00:08:39.736 "write": true, 00:08:39.736 "unmap": true, 00:08:39.736 "flush": true, 00:08:39.736 "reset": true, 00:08:39.736 "nvme_admin": false, 00:08:39.736 "nvme_io": false, 00:08:39.736 "nvme_io_md": false, 00:08:39.736 "write_zeroes": true, 00:08:39.736 "zcopy": true, 00:08:39.736 "get_zone_info": false, 00:08:39.736 "zone_management": false, 00:08:39.736 "zone_append": false, 00:08:39.736 "compare": false, 00:08:39.736 "compare_and_write": false, 00:08:39.736 "abort": true, 00:08:39.736 "seek_hole": false, 00:08:39.736 "seek_data": false, 00:08:39.736 "copy": true, 00:08:39.736 "nvme_iov_md": false 00:08:39.736 }, 00:08:39.736 "memory_domains": [ 00:08:39.736 { 00:08:39.736 "dma_device_id": "system", 00:08:39.736 "dma_device_type": 1 00:08:39.736 }, 00:08:39.736 { 00:08:39.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.736 "dma_device_type": 2 00:08:39.736 } 00:08:39.736 ], 00:08:39.736 "driver_specific": {} 00:08:39.736 } 00:08:39.736 ] 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.736 12:28:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.736 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.736 "name": "Existed_Raid", 00:08:39.736 "uuid": "bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:39.736 "strip_size_kb": 64, 00:08:39.736 "state": "online", 00:08:39.736 "raid_level": "raid0", 00:08:39.736 "superblock": true, 00:08:39.736 "num_base_bdevs": 3, 00:08:39.736 "num_base_bdevs_discovered": 3, 00:08:39.736 "num_base_bdevs_operational": 3, 00:08:39.736 "base_bdevs_list": [ 00:08:39.736 { 00:08:39.736 "name": "NewBaseBdev", 00:08:39.736 "uuid": "b678ec44-6453-4447-971d-537f84ed6b39", 00:08:39.736 "is_configured": true, 00:08:39.736 "data_offset": 2048, 00:08:39.736 "data_size": 63488 00:08:39.736 }, 00:08:39.736 { 00:08:39.736 "name": "BaseBdev2", 00:08:39.736 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:39.736 "is_configured": true, 00:08:39.736 "data_offset": 2048, 00:08:39.736 "data_size": 63488 00:08:39.736 }, 00:08:39.736 { 00:08:39.736 
"name": "BaseBdev3", 00:08:39.736 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:39.736 "is_configured": true, 00:08:39.737 "data_offset": 2048, 00:08:39.737 "data_size": 63488 00:08:39.737 } 00:08:39.737 ] 00:08:39.737 }' 00:08:39.737 12:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.737 12:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.996 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.997 [2024-11-19 12:28:45.210789] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.997 "name": "Existed_Raid", 00:08:39.997 "aliases": [ 
00:08:39.997 "bbf50634-7885-4aa2-a85e-fb9dd44894cd" 00:08:39.997 ], 00:08:39.997 "product_name": "Raid Volume", 00:08:39.997 "block_size": 512, 00:08:39.997 "num_blocks": 190464, 00:08:39.997 "uuid": "bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:39.997 "assigned_rate_limits": { 00:08:39.997 "rw_ios_per_sec": 0, 00:08:39.997 "rw_mbytes_per_sec": 0, 00:08:39.997 "r_mbytes_per_sec": 0, 00:08:39.997 "w_mbytes_per_sec": 0 00:08:39.997 }, 00:08:39.997 "claimed": false, 00:08:39.997 "zoned": false, 00:08:39.997 "supported_io_types": { 00:08:39.997 "read": true, 00:08:39.997 "write": true, 00:08:39.997 "unmap": true, 00:08:39.997 "flush": true, 00:08:39.997 "reset": true, 00:08:39.997 "nvme_admin": false, 00:08:39.997 "nvme_io": false, 00:08:39.997 "nvme_io_md": false, 00:08:39.997 "write_zeroes": true, 00:08:39.997 "zcopy": false, 00:08:39.997 "get_zone_info": false, 00:08:39.997 "zone_management": false, 00:08:39.997 "zone_append": false, 00:08:39.997 "compare": false, 00:08:39.997 "compare_and_write": false, 00:08:39.997 "abort": false, 00:08:39.997 "seek_hole": false, 00:08:39.997 "seek_data": false, 00:08:39.997 "copy": false, 00:08:39.997 "nvme_iov_md": false 00:08:39.997 }, 00:08:39.997 "memory_domains": [ 00:08:39.997 { 00:08:39.997 "dma_device_id": "system", 00:08:39.997 "dma_device_type": 1 00:08:39.997 }, 00:08:39.997 { 00:08:39.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.997 "dma_device_type": 2 00:08:39.997 }, 00:08:39.997 { 00:08:39.997 "dma_device_id": "system", 00:08:39.997 "dma_device_type": 1 00:08:39.997 }, 00:08:39.997 { 00:08:39.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.997 "dma_device_type": 2 00:08:39.997 }, 00:08:39.997 { 00:08:39.997 "dma_device_id": "system", 00:08:39.997 "dma_device_type": 1 00:08:39.997 }, 00:08:39.997 { 00:08:39.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.997 "dma_device_type": 2 00:08:39.997 } 00:08:39.997 ], 00:08:39.997 "driver_specific": { 00:08:39.997 "raid": { 00:08:39.997 "uuid": 
"bbf50634-7885-4aa2-a85e-fb9dd44894cd", 00:08:39.997 "strip_size_kb": 64, 00:08:39.997 "state": "online", 00:08:39.997 "raid_level": "raid0", 00:08:39.997 "superblock": true, 00:08:39.997 "num_base_bdevs": 3, 00:08:39.997 "num_base_bdevs_discovered": 3, 00:08:39.997 "num_base_bdevs_operational": 3, 00:08:39.997 "base_bdevs_list": [ 00:08:39.997 { 00:08:39.997 "name": "NewBaseBdev", 00:08:39.997 "uuid": "b678ec44-6453-4447-971d-537f84ed6b39", 00:08:39.997 "is_configured": true, 00:08:39.997 "data_offset": 2048, 00:08:39.997 "data_size": 63488 00:08:39.997 }, 00:08:39.997 { 00:08:39.997 "name": "BaseBdev2", 00:08:39.997 "uuid": "68d481d8-7399-409d-98aa-9ffbe57c6b35", 00:08:39.997 "is_configured": true, 00:08:39.997 "data_offset": 2048, 00:08:39.997 "data_size": 63488 00:08:39.997 }, 00:08:39.997 { 00:08:39.997 "name": "BaseBdev3", 00:08:39.997 "uuid": "ce0d913d-2317-4a2d-b438-b1edd26b224d", 00:08:39.997 "is_configured": true, 00:08:39.997 "data_offset": 2048, 00:08:39.997 "data_size": 63488 00:08:39.997 } 00:08:39.997 ] 00:08:39.997 } 00:08:39.997 } 00:08:39.997 }' 00:08:39.997 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:40.257 BaseBdev2 00:08:40.257 BaseBdev3' 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.257 
12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.257 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.258 [2024-11-19 12:28:45.458067] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.258 [2024-11-19 12:28:45.458107] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.258 [2024-11-19 12:28:45.458204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.258 [2024-11-19 12:28:45.458259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.258 [2024-11-19 12:28:45.458278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75809 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75809 ']' 00:08:40.258 12:28:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75809 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75809 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75809' 00:08:40.258 killing process with pid 75809 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75809 00:08:40.258 [2024-11-19 12:28:45.507437] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.258 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75809 00:08:40.518 [2024-11-19 12:28:45.539555] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.777 12:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:40.777 00:08:40.777 real 0m8.879s 00:08:40.777 user 0m15.120s 00:08:40.777 sys 0m1.778s 00:08:40.777 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.778 12:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.778 ************************************ 00:08:40.778 END TEST raid_state_function_test_sb 00:08:40.778 ************************************ 00:08:40.778 12:28:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:40.778 12:28:45 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:40.778 12:28:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.778 12:28:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.778 ************************************ 00:08:40.778 START TEST raid_superblock_test 00:08:40.778 ************************************ 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:40.778 12:28:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76418 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76418 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76418 ']' 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.778 12:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.778 [2024-11-19 12:28:45.957314] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:40.778 [2024-11-19 12:28:45.957550] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76418 ] 00:08:41.037 [2024-11-19 12:28:46.098619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.037 [2024-11-19 12:28:46.152031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.037 [2024-11-19 12:28:46.195088] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.037 [2024-11-19 12:28:46.195225] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:41.978 
12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.978 malloc1 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.978 [2024-11-19 12:28:46.901878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.978 [2024-11-19 12:28:46.902046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.978 [2024-11-19 12:28:46.902089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:41.978 [2024-11-19 12:28:46.902158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.978 [2024-11-19 12:28:46.904328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.978 [2024-11-19 12:28:46.904419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.978 pt1 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.978 malloc2 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.978 [2024-11-19 12:28:46.943981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.978 [2024-11-19 12:28:46.944165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.978 [2024-11-19 12:28:46.944207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:41.978 [2024-11-19 12:28:46.944255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.978 [2024-11-19 12:28:46.946843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.978 [2024-11-19 12:28:46.946923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.978 
pt2 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.978 malloc3 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.978 [2024-11-19 12:28:46.972766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:41.978 [2024-11-19 12:28:46.972923] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.978 [2024-11-19 12:28:46.972976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:41.978 [2024-11-19 12:28:46.973006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.978 [2024-11-19 12:28:46.975188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.978 [2024-11-19 12:28:46.975265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:41.978 pt3 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.978 [2024-11-19 12:28:46.984793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.978 [2024-11-19 12:28:46.986709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.978 [2024-11-19 12:28:46.986854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:41.978 [2024-11-19 12:28:46.987036] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:41.978 [2024-11-19 12:28:46.987082] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:41.978 [2024-11-19 12:28:46.987389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:41.978 [2024-11-19 12:28:46.987566] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:41.978 [2024-11-19 12:28:46.987613] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:41.978 [2024-11-19 12:28:46.987863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.978 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.979 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.979 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.979 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.979 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.979 12:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.979 12:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.979 12:28:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.979 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.979 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.979 "name": "raid_bdev1", 00:08:41.979 "uuid": "fc1121c1-378e-468b-8a49-7239798acf20", 00:08:41.979 "strip_size_kb": 64, 00:08:41.979 "state": "online", 00:08:41.979 "raid_level": "raid0", 00:08:41.979 "superblock": true, 00:08:41.979 "num_base_bdevs": 3, 00:08:41.979 "num_base_bdevs_discovered": 3, 00:08:41.979 "num_base_bdevs_operational": 3, 00:08:41.979 "base_bdevs_list": [ 00:08:41.979 { 00:08:41.979 "name": "pt1", 00:08:41.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.979 "is_configured": true, 00:08:41.979 "data_offset": 2048, 00:08:41.979 "data_size": 63488 00:08:41.979 }, 00:08:41.979 { 00:08:41.979 "name": "pt2", 00:08:41.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.979 "is_configured": true, 00:08:41.979 "data_offset": 2048, 00:08:41.979 "data_size": 63488 00:08:41.979 }, 00:08:41.979 { 00:08:41.979 "name": "pt3", 00:08:41.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.979 "is_configured": true, 00:08:41.979 "data_offset": 2048, 00:08:41.979 "data_size": 63488 00:08:41.979 } 00:08:41.979 ] 00:08:41.979 }' 00:08:41.979 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.979 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.239 [2024-11-19 12:28:47.448280] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.239 "name": "raid_bdev1", 00:08:42.239 "aliases": [ 00:08:42.239 "fc1121c1-378e-468b-8a49-7239798acf20" 00:08:42.239 ], 00:08:42.239 "product_name": "Raid Volume", 00:08:42.239 "block_size": 512, 00:08:42.239 "num_blocks": 190464, 00:08:42.239 "uuid": "fc1121c1-378e-468b-8a49-7239798acf20", 00:08:42.239 "assigned_rate_limits": { 00:08:42.239 "rw_ios_per_sec": 0, 00:08:42.239 "rw_mbytes_per_sec": 0, 00:08:42.239 "r_mbytes_per_sec": 0, 00:08:42.239 "w_mbytes_per_sec": 0 00:08:42.239 }, 00:08:42.239 "claimed": false, 00:08:42.239 "zoned": false, 00:08:42.239 "supported_io_types": { 00:08:42.239 "read": true, 00:08:42.239 "write": true, 00:08:42.239 "unmap": true, 00:08:42.239 "flush": true, 00:08:42.239 "reset": true, 00:08:42.239 "nvme_admin": false, 00:08:42.239 "nvme_io": false, 00:08:42.239 "nvme_io_md": false, 00:08:42.239 "write_zeroes": true, 00:08:42.239 "zcopy": false, 00:08:42.239 "get_zone_info": false, 00:08:42.239 "zone_management": false, 00:08:42.239 "zone_append": false, 00:08:42.239 "compare": 
false, 00:08:42.239 "compare_and_write": false, 00:08:42.239 "abort": false, 00:08:42.239 "seek_hole": false, 00:08:42.239 "seek_data": false, 00:08:42.239 "copy": false, 00:08:42.239 "nvme_iov_md": false 00:08:42.239 }, 00:08:42.239 "memory_domains": [ 00:08:42.239 { 00:08:42.239 "dma_device_id": "system", 00:08:42.239 "dma_device_type": 1 00:08:42.239 }, 00:08:42.239 { 00:08:42.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.239 "dma_device_type": 2 00:08:42.239 }, 00:08:42.239 { 00:08:42.239 "dma_device_id": "system", 00:08:42.239 "dma_device_type": 1 00:08:42.239 }, 00:08:42.239 { 00:08:42.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.239 "dma_device_type": 2 00:08:42.239 }, 00:08:42.239 { 00:08:42.239 "dma_device_id": "system", 00:08:42.239 "dma_device_type": 1 00:08:42.239 }, 00:08:42.239 { 00:08:42.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.239 "dma_device_type": 2 00:08:42.239 } 00:08:42.239 ], 00:08:42.239 "driver_specific": { 00:08:42.239 "raid": { 00:08:42.239 "uuid": "fc1121c1-378e-468b-8a49-7239798acf20", 00:08:42.239 "strip_size_kb": 64, 00:08:42.239 "state": "online", 00:08:42.239 "raid_level": "raid0", 00:08:42.239 "superblock": true, 00:08:42.239 "num_base_bdevs": 3, 00:08:42.239 "num_base_bdevs_discovered": 3, 00:08:42.239 "num_base_bdevs_operational": 3, 00:08:42.239 "base_bdevs_list": [ 00:08:42.239 { 00:08:42.239 "name": "pt1", 00:08:42.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.239 "is_configured": true, 00:08:42.239 "data_offset": 2048, 00:08:42.239 "data_size": 63488 00:08:42.239 }, 00:08:42.239 { 00:08:42.239 "name": "pt2", 00:08:42.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.239 "is_configured": true, 00:08:42.239 "data_offset": 2048, 00:08:42.239 "data_size": 63488 00:08:42.239 }, 00:08:42.239 { 00:08:42.239 "name": "pt3", 00:08:42.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.239 "is_configured": true, 00:08:42.239 "data_offset": 2048, 00:08:42.239 "data_size": 
63488 00:08:42.239 } 00:08:42.239 ] 00:08:42.239 } 00:08:42.239 } 00:08:42.239 }' 00:08:42.239 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:42.499 pt2 00:08:42.499 pt3' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.499 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.499 [2024-11-19 12:28:47.743751] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fc1121c1-378e-468b-8a49-7239798acf20 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fc1121c1-378e-468b-8a49-7239798acf20 ']' 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.760 [2024-11-19 12:28:47.791344] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.760 [2024-11-19 12:28:47.791438] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.760 [2024-11-19 12:28:47.791563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.760 [2024-11-19 12:28:47.791654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.760 [2024-11-19 12:28:47.791712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.760 [2024-11-19 12:28:47.943149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:42.760 [2024-11-19 12:28:47.945145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:42.760 [2024-11-19 12:28:47.945198] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:42.760 [2024-11-19 12:28:47.945251] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:42.760 [2024-11-19 12:28:47.945315] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:42.760 [2024-11-19 12:28:47.945333] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:42.760 [2024-11-19 12:28:47.945347] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.760 [2024-11-19 12:28:47.945358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:42.760 request: 00:08:42.760 { 00:08:42.760 "name": "raid_bdev1", 00:08:42.760 "raid_level": "raid0", 00:08:42.760 "base_bdevs": [ 00:08:42.760 "malloc1", 00:08:42.760 "malloc2", 00:08:42.760 "malloc3" 00:08:42.760 ], 00:08:42.760 "strip_size_kb": 64, 00:08:42.760 "superblock": false, 00:08:42.760 "method": "bdev_raid_create", 00:08:42.760 "req_id": 1 00:08:42.760 } 00:08:42.760 Got JSON-RPC error response 00:08:42.760 response: 00:08:42.760 { 00:08:42.760 "code": -17, 00:08:42.760 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:42.760 } 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.760 12:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.760 [2024-11-19 12:28:48.010994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.760 [2024-11-19 12:28:48.011165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.760 [2024-11-19 12:28:48.011201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:42.760 [2024-11-19 12:28:48.011231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.760 [2024-11-19 12:28:48.013465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.760 [2024-11-19 12:28:48.013548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.760 [2024-11-19 12:28:48.013683] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:42.760 [2024-11-19 12:28:48.013792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:42.760 pt1 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.760 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.020 "name": "raid_bdev1", 00:08:43.020 "uuid": "fc1121c1-378e-468b-8a49-7239798acf20", 00:08:43.020 
"strip_size_kb": 64, 00:08:43.020 "state": "configuring", 00:08:43.020 "raid_level": "raid0", 00:08:43.020 "superblock": true, 00:08:43.020 "num_base_bdevs": 3, 00:08:43.020 "num_base_bdevs_discovered": 1, 00:08:43.020 "num_base_bdevs_operational": 3, 00:08:43.020 "base_bdevs_list": [ 00:08:43.020 { 00:08:43.020 "name": "pt1", 00:08:43.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.020 "is_configured": true, 00:08:43.020 "data_offset": 2048, 00:08:43.020 "data_size": 63488 00:08:43.020 }, 00:08:43.020 { 00:08:43.020 "name": null, 00:08:43.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.020 "is_configured": false, 00:08:43.020 "data_offset": 2048, 00:08:43.020 "data_size": 63488 00:08:43.020 }, 00:08:43.020 { 00:08:43.020 "name": null, 00:08:43.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.020 "is_configured": false, 00:08:43.020 "data_offset": 2048, 00:08:43.020 "data_size": 63488 00:08:43.020 } 00:08:43.020 ] 00:08:43.020 }' 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.020 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.280 [2024-11-19 12:28:48.462226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.280 [2024-11-19 12:28:48.462382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.280 [2024-11-19 12:28:48.462407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:43.280 [2024-11-19 12:28:48.462421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.280 [2024-11-19 12:28:48.462868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.280 [2024-11-19 12:28:48.462891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.280 [2024-11-19 12:28:48.462969] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.280 [2024-11-19 12:28:48.462994] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.280 pt2 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.280 [2024-11-19 12:28:48.470205] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:43.280 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.281 12:28:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.281 "name": "raid_bdev1", 00:08:43.281 "uuid": "fc1121c1-378e-468b-8a49-7239798acf20", 00:08:43.281 "strip_size_kb": 64, 00:08:43.281 "state": "configuring", 00:08:43.281 "raid_level": "raid0", 00:08:43.281 "superblock": true, 00:08:43.281 "num_base_bdevs": 3, 00:08:43.281 "num_base_bdevs_discovered": 1, 00:08:43.281 "num_base_bdevs_operational": 3, 00:08:43.281 "base_bdevs_list": [ 00:08:43.281 { 00:08:43.281 "name": "pt1", 00:08:43.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.281 "is_configured": true, 00:08:43.281 "data_offset": 2048, 00:08:43.281 "data_size": 63488 00:08:43.281 }, 00:08:43.281 { 00:08:43.281 "name": null, 00:08:43.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.281 "is_configured": false, 00:08:43.281 "data_offset": 0, 00:08:43.281 "data_size": 63488 00:08:43.281 }, 00:08:43.281 { 00:08:43.281 "name": null, 00:08:43.281 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.281 
"is_configured": false, 00:08:43.281 "data_offset": 2048, 00:08:43.281 "data_size": 63488 00:08:43.281 } 00:08:43.281 ] 00:08:43.281 }' 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.281 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.850 [2024-11-19 12:28:48.897513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.850 [2024-11-19 12:28:48.897603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.850 [2024-11-19 12:28:48.897624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:43.850 [2024-11-19 12:28:48.897633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.850 [2024-11-19 12:28:48.898061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.850 [2024-11-19 12:28:48.898085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.850 [2024-11-19 12:28:48.898165] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.850 [2024-11-19 12:28:48.898187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.850 pt2 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.850 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.850 [2024-11-19 12:28:48.909502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:43.850 [2024-11-19 12:28:48.909609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.850 [2024-11-19 12:28:48.909632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:43.850 [2024-11-19 12:28:48.909640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.851 [2024-11-19 12:28:48.910071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.851 [2024-11-19 12:28:48.910099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:43.851 [2024-11-19 12:28:48.910181] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:43.851 [2024-11-19 12:28:48.910204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:43.851 [2024-11-19 12:28:48.910304] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:43.851 [2024-11-19 12:28:48.910322] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:43.851 [2024-11-19 12:28:48.910564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:43.851 [2024-11-19 12:28:48.910675] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:43.851 [2024-11-19 12:28:48.910685] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:43.851 [2024-11-19 12:28:48.910835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.851 pt3 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.851 "name": "raid_bdev1", 00:08:43.851 "uuid": "fc1121c1-378e-468b-8a49-7239798acf20", 00:08:43.851 "strip_size_kb": 64, 00:08:43.851 "state": "online", 00:08:43.851 "raid_level": "raid0", 00:08:43.851 "superblock": true, 00:08:43.851 "num_base_bdevs": 3, 00:08:43.851 "num_base_bdevs_discovered": 3, 00:08:43.851 "num_base_bdevs_operational": 3, 00:08:43.851 "base_bdevs_list": [ 00:08:43.851 { 00:08:43.851 "name": "pt1", 00:08:43.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.851 "is_configured": true, 00:08:43.851 "data_offset": 2048, 00:08:43.851 "data_size": 63488 00:08:43.851 }, 00:08:43.851 { 00:08:43.851 "name": "pt2", 00:08:43.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.851 "is_configured": true, 00:08:43.851 "data_offset": 2048, 00:08:43.851 "data_size": 63488 00:08:43.851 }, 00:08:43.851 { 00:08:43.851 "name": "pt3", 00:08:43.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.851 "is_configured": true, 00:08:43.851 "data_offset": 2048, 00:08:43.851 "data_size": 63488 00:08:43.851 } 00:08:43.851 ] 00:08:43.851 }' 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.851 12:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.112 12:28:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.112 [2024-11-19 12:28:49.345022] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.112 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.374 "name": "raid_bdev1", 00:08:44.374 "aliases": [ 00:08:44.374 "fc1121c1-378e-468b-8a49-7239798acf20" 00:08:44.374 ], 00:08:44.374 "product_name": "Raid Volume", 00:08:44.374 "block_size": 512, 00:08:44.374 "num_blocks": 190464, 00:08:44.374 "uuid": "fc1121c1-378e-468b-8a49-7239798acf20", 00:08:44.374 "assigned_rate_limits": { 00:08:44.374 "rw_ios_per_sec": 0, 00:08:44.374 "rw_mbytes_per_sec": 0, 00:08:44.374 "r_mbytes_per_sec": 0, 00:08:44.374 "w_mbytes_per_sec": 0 00:08:44.374 }, 00:08:44.374 "claimed": false, 00:08:44.374 "zoned": false, 00:08:44.374 "supported_io_types": { 00:08:44.374 "read": true, 00:08:44.374 "write": true, 00:08:44.374 "unmap": true, 00:08:44.374 "flush": true, 00:08:44.374 "reset": true, 00:08:44.374 "nvme_admin": false, 00:08:44.374 "nvme_io": false, 00:08:44.374 "nvme_io_md": false, 00:08:44.374 
"write_zeroes": true, 00:08:44.374 "zcopy": false, 00:08:44.374 "get_zone_info": false, 00:08:44.374 "zone_management": false, 00:08:44.374 "zone_append": false, 00:08:44.374 "compare": false, 00:08:44.374 "compare_and_write": false, 00:08:44.374 "abort": false, 00:08:44.374 "seek_hole": false, 00:08:44.374 "seek_data": false, 00:08:44.374 "copy": false, 00:08:44.374 "nvme_iov_md": false 00:08:44.374 }, 00:08:44.374 "memory_domains": [ 00:08:44.374 { 00:08:44.374 "dma_device_id": "system", 00:08:44.374 "dma_device_type": 1 00:08:44.374 }, 00:08:44.374 { 00:08:44.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.374 "dma_device_type": 2 00:08:44.374 }, 00:08:44.374 { 00:08:44.374 "dma_device_id": "system", 00:08:44.374 "dma_device_type": 1 00:08:44.374 }, 00:08:44.374 { 00:08:44.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.374 "dma_device_type": 2 00:08:44.374 }, 00:08:44.374 { 00:08:44.374 "dma_device_id": "system", 00:08:44.374 "dma_device_type": 1 00:08:44.374 }, 00:08:44.374 { 00:08:44.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.374 "dma_device_type": 2 00:08:44.374 } 00:08:44.374 ], 00:08:44.374 "driver_specific": { 00:08:44.374 "raid": { 00:08:44.374 "uuid": "fc1121c1-378e-468b-8a49-7239798acf20", 00:08:44.374 "strip_size_kb": 64, 00:08:44.374 "state": "online", 00:08:44.374 "raid_level": "raid0", 00:08:44.374 "superblock": true, 00:08:44.374 "num_base_bdevs": 3, 00:08:44.374 "num_base_bdevs_discovered": 3, 00:08:44.374 "num_base_bdevs_operational": 3, 00:08:44.374 "base_bdevs_list": [ 00:08:44.374 { 00:08:44.374 "name": "pt1", 00:08:44.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.374 "is_configured": true, 00:08:44.374 "data_offset": 2048, 00:08:44.374 "data_size": 63488 00:08:44.374 }, 00:08:44.374 { 00:08:44.374 "name": "pt2", 00:08:44.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.374 "is_configured": true, 00:08:44.374 "data_offset": 2048, 00:08:44.374 "data_size": 63488 00:08:44.374 }, 00:08:44.374 
{ 00:08:44.374 "name": "pt3", 00:08:44.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.374 "is_configured": true, 00:08:44.374 "data_offset": 2048, 00:08:44.374 "data_size": 63488 00:08:44.374 } 00:08:44.374 ] 00:08:44.374 } 00:08:44.374 } 00:08:44.374 }' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.374 pt2 00:08:44.374 pt3' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.374 [2024-11-19 
12:28:49.580566] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fc1121c1-378e-468b-8a49-7239798acf20 '!=' fc1121c1-378e-468b-8a49-7239798acf20 ']' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76418 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76418 ']' 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76418 00:08:44.374 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:44.375 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.646 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76418 00:08:44.646 killing process with pid 76418 00:08:44.646 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.646 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.646 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76418' 00:08:44.646 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76418 00:08:44.646 [2024-11-19 12:28:49.652676] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.646 [2024-11-19 12:28:49.652780] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.646 [2024-11-19 12:28:49.652846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.646 [2024-11-19 12:28:49.652856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:44.646 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76418 00:08:44.646 [2024-11-19 12:28:49.685619] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.927 12:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:44.927 00:08:44.927 real 0m4.062s 00:08:44.927 user 0m6.382s 00:08:44.927 sys 0m0.903s 00:08:44.927 ************************************ 00:08:44.927 END TEST raid_superblock_test 00:08:44.927 ************************************ 00:08:44.927 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.927 12:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.927 12:28:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:44.927 12:28:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:44.927 12:28:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.927 12:28:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.927 ************************************ 00:08:44.927 START TEST raid_read_error_test 00:08:44.927 ************************************ 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:44.927 12:28:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6gQJ5l3wfO 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76660 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76660 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76660 ']' 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.927 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.927 [2024-11-19 12:28:50.103247] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:44.927 [2024-11-19 12:28:50.103475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76660 ] 00:08:45.187 [2024-11-19 12:28:50.262994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.187 [2024-11-19 12:28:50.307893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.187 [2024-11-19 12:28:50.350089] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.187 [2024-11-19 12:28:50.350209] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.759 BaseBdev1_malloc 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.759 true 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.759 [2024-11-19 12:28:50.980520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:45.759 [2024-11-19 12:28:50.980673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.759 [2024-11-19 12:28:50.980716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:45.759 [2024-11-19 12:28:50.980727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.759 [2024-11-19 12:28:50.982899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.759 [2024-11-19 12:28:50.982936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:45.759 BaseBdev1 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.759 12:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.759 BaseBdev2_malloc 00:08:45.759 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.759 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:45.759 12:28:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.759 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.020 true 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.020 [2024-11-19 12:28:51.032092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:46.020 [2024-11-19 12:28:51.032152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.020 [2024-11-19 12:28:51.032174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:46.020 [2024-11-19 12:28:51.032182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.020 [2024-11-19 12:28:51.034292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.020 [2024-11-19 12:28:51.034329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:46.020 BaseBdev2 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.020 BaseBdev3_malloc 00:08:46.020 12:28:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.020 true 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.020 [2024-11-19 12:28:51.072828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:46.020 [2024-11-19 12:28:51.072896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.020 [2024-11-19 12:28:51.072930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:46.020 [2024-11-19 12:28:51.072939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.020 [2024-11-19 12:28:51.074981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.020 [2024-11-19 12:28:51.075085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:46.020 BaseBdev3 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.020 [2024-11-19 12:28:51.084867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.020 [2024-11-19 12:28:51.086644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.020 [2024-11-19 12:28:51.086731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:46.020 [2024-11-19 12:28:51.086915] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:46.020 [2024-11-19 12:28:51.086939] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:46.020 [2024-11-19 12:28:51.087188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:46.020 [2024-11-19 12:28:51.087330] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:46.020 [2024-11-19 12:28:51.087345] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:46.020 [2024-11-19 12:28:51.087476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.020 12:28:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.020 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.020 "name": "raid_bdev1", 00:08:46.020 "uuid": "d3d7979b-019e-4962-9583-52d34334d54d", 00:08:46.020 "strip_size_kb": 64, 00:08:46.020 "state": "online", 00:08:46.020 "raid_level": "raid0", 00:08:46.020 "superblock": true, 00:08:46.020 "num_base_bdevs": 3, 00:08:46.020 "num_base_bdevs_discovered": 3, 00:08:46.020 "num_base_bdevs_operational": 3, 00:08:46.020 "base_bdevs_list": [ 00:08:46.020 { 00:08:46.020 "name": "BaseBdev1", 00:08:46.020 "uuid": "04952e5f-d4fd-50fd-9c9f-6a48f0652ba3", 00:08:46.020 "is_configured": true, 00:08:46.020 "data_offset": 2048, 00:08:46.020 "data_size": 63488 00:08:46.020 }, 00:08:46.020 { 00:08:46.021 "name": "BaseBdev2", 00:08:46.021 "uuid": "3d8c8c2c-4cc5-51fe-9a5f-559843d5f417", 00:08:46.021 "is_configured": true, 00:08:46.021 "data_offset": 2048, 00:08:46.021 "data_size": 63488 
00:08:46.021 }, 00:08:46.021 { 00:08:46.021 "name": "BaseBdev3", 00:08:46.021 "uuid": "bdca683b-9f9b-53e3-bad2-46a5baa69796", 00:08:46.021 "is_configured": true, 00:08:46.021 "data_offset": 2048, 00:08:46.021 "data_size": 63488 00:08:46.021 } 00:08:46.021 ] 00:08:46.021 }' 00:08:46.021 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.021 12:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.281 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:46.281 12:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:46.541 [2024-11-19 12:28:51.604406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.482 "name": "raid_bdev1", 00:08:47.482 "uuid": "d3d7979b-019e-4962-9583-52d34334d54d", 00:08:47.482 "strip_size_kb": 64, 00:08:47.482 "state": "online", 00:08:47.482 "raid_level": "raid0", 00:08:47.482 "superblock": true, 00:08:47.482 "num_base_bdevs": 3, 00:08:47.482 "num_base_bdevs_discovered": 3, 00:08:47.482 "num_base_bdevs_operational": 3, 00:08:47.482 "base_bdevs_list": [ 00:08:47.482 { 00:08:47.482 "name": "BaseBdev1", 00:08:47.482 "uuid": "04952e5f-d4fd-50fd-9c9f-6a48f0652ba3", 00:08:47.482 "is_configured": true, 00:08:47.482 "data_offset": 2048, 00:08:47.482 "data_size": 63488 
00:08:47.482 }, 00:08:47.482 { 00:08:47.482 "name": "BaseBdev2", 00:08:47.482 "uuid": "3d8c8c2c-4cc5-51fe-9a5f-559843d5f417", 00:08:47.482 "is_configured": true, 00:08:47.482 "data_offset": 2048, 00:08:47.482 "data_size": 63488 00:08:47.482 }, 00:08:47.482 { 00:08:47.482 "name": "BaseBdev3", 00:08:47.482 "uuid": "bdca683b-9f9b-53e3-bad2-46a5baa69796", 00:08:47.482 "is_configured": true, 00:08:47.482 "data_offset": 2048, 00:08:47.482 "data_size": 63488 00:08:47.482 } 00:08:47.482 ] 00:08:47.482 }' 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.482 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.742 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.742 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.742 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.742 [2024-11-19 12:28:52.984165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.742 [2024-11-19 12:28:52.984291] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.742 [2024-11-19 12:28:52.986854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.742 [2024-11-19 12:28:52.986913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.742 [2024-11-19 12:28:52.986950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.742 [2024-11-19 12:28:52.986969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:47.742 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.742 { 00:08:47.742 "results": [ 00:08:47.742 { 00:08:47.742 "job": "raid_bdev1", 
00:08:47.742 "core_mask": "0x1", 00:08:47.742 "workload": "randrw", 00:08:47.742 "percentage": 50, 00:08:47.742 "status": "finished", 00:08:47.742 "queue_depth": 1, 00:08:47.742 "io_size": 131072, 00:08:47.742 "runtime": 1.380636, 00:08:47.742 "iops": 16872.658687735217, 00:08:47.742 "mibps": 2109.082335966902, 00:08:47.743 "io_failed": 1, 00:08:47.743 "io_timeout": 0, 00:08:47.743 "avg_latency_us": 82.2379192859542, 00:08:47.743 "min_latency_us": 21.128384279475984, 00:08:47.743 "max_latency_us": 1345.0620087336245 00:08:47.743 } 00:08:47.743 ], 00:08:47.743 "core_count": 1 00:08:47.743 } 00:08:47.743 12:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76660 00:08:47.743 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76660 ']' 00:08:47.743 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76660 00:08:47.743 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:47.743 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.743 12:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76660 00:08:48.002 killing process with pid 76660 00:08:48.002 12:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.002 12:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.002 12:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76660' 00:08:48.002 12:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76660 00:08:48.003 [2024-11-19 12:28:53.034898] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.003 12:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76660 00:08:48.003 [2024-11-19 
12:28:53.060566] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6gQJ5l3wfO 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:48.263 00:08:48.263 real 0m3.302s 00:08:48.263 user 0m4.141s 00:08:48.263 sys 0m0.548s 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.263 ************************************ 00:08:48.263 END TEST raid_read_error_test 00:08:48.263 ************************************ 00:08:48.263 12:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.263 12:28:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:48.263 12:28:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:48.263 12:28:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.263 12:28:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.263 ************************************ 00:08:48.263 START TEST raid_write_error_test 00:08:48.263 ************************************ 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:48.263 12:28:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:48.263 12:28:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.drU5US0IbY 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76789 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76789 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76789 ']' 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.263 12:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.263 [2024-11-19 12:28:53.483688] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:48.263 [2024-11-19 12:28:53.483978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76789 ] 00:08:48.523 [2024-11-19 12:28:53.647479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.523 [2024-11-19 12:28:53.692397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.523 [2024-11-19 12:28:53.734476] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.523 [2024-11-19 12:28:53.734515] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.093 BaseBdev1_malloc 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.093 true 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.093 [2024-11-19 12:28:54.340838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.093 [2024-11-19 12:28:54.340920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.093 [2024-11-19 12:28:54.340968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:49.093 [2024-11-19 12:28:54.340978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.093 [2024-11-19 12:28:54.343167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.093 [2024-11-19 12:28:54.343207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.093 BaseBdev1 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.093 12:28:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.354 BaseBdev2_malloc 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.354 true 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.354 [2024-11-19 12:28:54.390523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:49.354 [2024-11-19 12:28:54.390583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.354 [2024-11-19 12:28:54.390602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:49.354 [2024-11-19 12:28:54.390611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.354 [2024-11-19 12:28:54.392691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.354 [2024-11-19 12:28:54.392729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:49.354 BaseBdev2 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.354 12:28:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.354 BaseBdev3_malloc 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.354 true 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.354 [2024-11-19 12:28:54.431227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:49.354 [2024-11-19 12:28:54.431282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.354 [2024-11-19 12:28:54.431302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:49.354 [2024-11-19 12:28:54.431311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.354 [2024-11-19 12:28:54.433419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.354 [2024-11-19 12:28:54.433458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:49.354 BaseBdev3 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.354 [2024-11-19 12:28:54.443267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.354 [2024-11-19 12:28:54.445189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.354 [2024-11-19 12:28:54.445265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.354 [2024-11-19 12:28:54.445429] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:49.354 [2024-11-19 12:28:54.445448] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:49.354 [2024-11-19 12:28:54.445715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:49.354 [2024-11-19 12:28:54.445876] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:49.354 [2024-11-19 12:28:54.445888] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:49.354 [2024-11-19 12:28:54.446021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.354 "name": "raid_bdev1", 00:08:49.354 "uuid": "75bf20a9-87d9-4a0b-b74c-49f1ec833528", 00:08:49.354 "strip_size_kb": 64, 00:08:49.354 "state": "online", 00:08:49.354 "raid_level": "raid0", 00:08:49.354 "superblock": true, 00:08:49.354 "num_base_bdevs": 3, 00:08:49.354 "num_base_bdevs_discovered": 3, 00:08:49.354 "num_base_bdevs_operational": 3, 00:08:49.354 "base_bdevs_list": [ 00:08:49.354 { 00:08:49.354 "name": "BaseBdev1", 
00:08:49.354 "uuid": "e6e46d22-6beb-506f-b986-ac1d852350cc", 00:08:49.354 "is_configured": true, 00:08:49.354 "data_offset": 2048, 00:08:49.354 "data_size": 63488 00:08:49.354 }, 00:08:49.354 { 00:08:49.354 "name": "BaseBdev2", 00:08:49.354 "uuid": "6b65675f-ed2f-5602-b57d-0a8b2adec848", 00:08:49.354 "is_configured": true, 00:08:49.354 "data_offset": 2048, 00:08:49.354 "data_size": 63488 00:08:49.354 }, 00:08:49.354 { 00:08:49.354 "name": "BaseBdev3", 00:08:49.354 "uuid": "d556c510-3255-527c-aabf-6f197b620c16", 00:08:49.354 "is_configured": true, 00:08:49.354 "data_offset": 2048, 00:08:49.354 "data_size": 63488 00:08:49.354 } 00:08:49.354 ] 00:08:49.354 }' 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.354 12:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.614 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:49.614 12:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:49.873 [2024-11-19 12:28:54.946904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.814 "name": "raid_bdev1", 00:08:50.814 "uuid": "75bf20a9-87d9-4a0b-b74c-49f1ec833528", 00:08:50.814 "strip_size_kb": 64, 00:08:50.814 "state": "online", 00:08:50.814 
"raid_level": "raid0", 00:08:50.814 "superblock": true, 00:08:50.814 "num_base_bdevs": 3, 00:08:50.814 "num_base_bdevs_discovered": 3, 00:08:50.814 "num_base_bdevs_operational": 3, 00:08:50.814 "base_bdevs_list": [ 00:08:50.814 { 00:08:50.814 "name": "BaseBdev1", 00:08:50.814 "uuid": "e6e46d22-6beb-506f-b986-ac1d852350cc", 00:08:50.814 "is_configured": true, 00:08:50.814 "data_offset": 2048, 00:08:50.814 "data_size": 63488 00:08:50.814 }, 00:08:50.814 { 00:08:50.814 "name": "BaseBdev2", 00:08:50.814 "uuid": "6b65675f-ed2f-5602-b57d-0a8b2adec848", 00:08:50.814 "is_configured": true, 00:08:50.814 "data_offset": 2048, 00:08:50.814 "data_size": 63488 00:08:50.814 }, 00:08:50.814 { 00:08:50.814 "name": "BaseBdev3", 00:08:50.814 "uuid": "d556c510-3255-527c-aabf-6f197b620c16", 00:08:50.814 "is_configured": true, 00:08:50.814 "data_offset": 2048, 00:08:50.814 "data_size": 63488 00:08:50.814 } 00:08:50.814 ] 00:08:50.814 }' 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.814 12:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.074 [2024-11-19 12:28:56.282688] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.074 [2024-11-19 12:28:56.282777] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.074 [2024-11-19 12:28:56.285401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.074 [2024-11-19 12:28:56.285451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.074 [2024-11-19 12:28:56.285486] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.074 [2024-11-19 12:28:56.285498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:51.074 { 00:08:51.074 "results": [ 00:08:51.074 { 00:08:51.074 "job": "raid_bdev1", 00:08:51.074 "core_mask": "0x1", 00:08:51.074 "workload": "randrw", 00:08:51.074 "percentage": 50, 00:08:51.074 "status": "finished", 00:08:51.074 "queue_depth": 1, 00:08:51.074 "io_size": 131072, 00:08:51.074 "runtime": 1.336527, 00:08:51.074 "iops": 17157.154326100408, 00:08:51.074 "mibps": 2144.644290762551, 00:08:51.074 "io_failed": 1, 00:08:51.074 "io_timeout": 0, 00:08:51.074 "avg_latency_us": 80.82344505151741, 00:08:51.074 "min_latency_us": 21.016593886462882, 00:08:51.074 "max_latency_us": 1359.3711790393013 00:08:51.074 } 00:08:51.074 ], 00:08:51.074 "core_count": 1 00:08:51.074 } 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76789 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76789 ']' 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76789 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76789 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 76789' 00:08:51.074 killing process with pid 76789 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76789 00:08:51.074 [2024-11-19 12:28:56.329313] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.074 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76789 00:08:51.334 [2024-11-19 12:28:56.355177] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.334 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.drU5US0IbY 00:08:51.334 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:51.334 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:51.593 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:51.593 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:51.593 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.593 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.593 12:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:51.593 00:08:51.593 real 0m3.229s 00:08:51.593 user 0m3.995s 00:08:51.593 sys 0m0.556s 00:08:51.593 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.593 12:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.593 ************************************ 00:08:51.593 END TEST raid_write_error_test 00:08:51.593 ************************************ 00:08:51.593 12:28:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:51.593 12:28:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:51.593 12:28:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:51.593 12:28:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.593 12:28:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.593 ************************************ 00:08:51.593 START TEST raid_state_function_test 00:08:51.593 ************************************ 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:51.593 12:28:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76916 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76916' 00:08:51.593 Process raid pid: 76916 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76916 00:08:51.593 12:28:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76916 ']' 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.593 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.594 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.594 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.594 12:28:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.594 [2024-11-19 12:28:56.766203] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:51.594 [2024-11-19 12:28:56.766423] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.853 [2024-11-19 12:28:56.926974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.853 [2024-11-19 12:28:56.972078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.853 [2024-11-19 12:28:57.013845] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.853 [2024-11-19 12:28:57.013973] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.428 [2024-11-19 12:28:57.611349] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.428 [2024-11-19 12:28:57.611483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.428 [2024-11-19 12:28:57.611535] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.428 [2024-11-19 12:28:57.611560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.428 [2024-11-19 12:28:57.611579] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:52.428 [2024-11-19 12:28:57.611594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.428 "name": "Existed_Raid", 00:08:52.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.428 "strip_size_kb": 64, 00:08:52.428 "state": "configuring", 00:08:52.428 "raid_level": "concat", 00:08:52.428 "superblock": false, 00:08:52.428 "num_base_bdevs": 3, 00:08:52.428 "num_base_bdevs_discovered": 0, 00:08:52.428 "num_base_bdevs_operational": 3, 00:08:52.428 "base_bdevs_list": [ 00:08:52.428 { 00:08:52.428 "name": "BaseBdev1", 00:08:52.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.428 "is_configured": false, 00:08:52.428 "data_offset": 0, 00:08:52.428 "data_size": 0 00:08:52.428 }, 00:08:52.428 { 00:08:52.428 "name": "BaseBdev2", 00:08:52.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.428 "is_configured": false, 00:08:52.428 "data_offset": 0, 00:08:52.428 "data_size": 0 00:08:52.428 }, 00:08:52.428 { 00:08:52.428 "name": "BaseBdev3", 00:08:52.428 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:52.428 "is_configured": false, 00:08:52.428 "data_offset": 0, 00:08:52.428 "data_size": 0 00:08:52.428 } 00:08:52.428 ] 00:08:52.428 }' 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.428 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.012 12:28:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.012 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.012 12:28:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.012 [2024-11-19 12:28:58.006619] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.012 [2024-11-19 12:28:58.006665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:53.012 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.012 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.012 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.012 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.012 [2024-11-19 12:28:58.018623] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.012 [2024-11-19 12:28:58.018674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.013 [2024-11-19 12:28:58.018683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.013 [2024-11-19 12:28:58.018692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:53.013 [2024-11-19 12:28:58.018705] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.013 [2024-11-19 12:28:58.018714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.013 [2024-11-19 12:28:58.039644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.013 BaseBdev1 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.013 [ 00:08:53.013 { 00:08:53.013 "name": "BaseBdev1", 00:08:53.013 "aliases": [ 00:08:53.013 "c321b9f3-3840-467f-a97c-62b335105897" 00:08:53.013 ], 00:08:53.013 "product_name": "Malloc disk", 00:08:53.013 "block_size": 512, 00:08:53.013 "num_blocks": 65536, 00:08:53.013 "uuid": "c321b9f3-3840-467f-a97c-62b335105897", 00:08:53.013 "assigned_rate_limits": { 00:08:53.013 "rw_ios_per_sec": 0, 00:08:53.013 "rw_mbytes_per_sec": 0, 00:08:53.013 "r_mbytes_per_sec": 0, 00:08:53.013 "w_mbytes_per_sec": 0 00:08:53.013 }, 00:08:53.013 "claimed": true, 00:08:53.013 "claim_type": "exclusive_write", 00:08:53.013 "zoned": false, 00:08:53.013 "supported_io_types": { 00:08:53.013 "read": true, 00:08:53.013 "write": true, 00:08:53.013 "unmap": true, 00:08:53.013 "flush": true, 00:08:53.013 "reset": true, 00:08:53.013 "nvme_admin": false, 00:08:53.013 "nvme_io": false, 00:08:53.013 "nvme_io_md": false, 00:08:53.013 "write_zeroes": true, 00:08:53.013 "zcopy": true, 00:08:53.013 "get_zone_info": false, 00:08:53.013 "zone_management": false, 00:08:53.013 "zone_append": false, 00:08:53.013 "compare": false, 00:08:53.013 "compare_and_write": false, 00:08:53.013 "abort": true, 00:08:53.013 "seek_hole": false, 00:08:53.013 "seek_data": false, 00:08:53.013 "copy": true, 00:08:53.013 "nvme_iov_md": false 00:08:53.013 }, 00:08:53.013 "memory_domains": [ 00:08:53.013 { 00:08:53.013 "dma_device_id": "system", 00:08:53.013 "dma_device_type": 1 00:08:53.013 }, 00:08:53.013 { 00:08:53.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:53.013 "dma_device_type": 2 00:08:53.013 } 00:08:53.013 ], 00:08:53.013 "driver_specific": {} 00:08:53.013 } 00:08:53.013 ] 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.013 12:28:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.013 "name": "Existed_Raid", 00:08:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.013 "strip_size_kb": 64, 00:08:53.013 "state": "configuring", 00:08:53.013 "raid_level": "concat", 00:08:53.013 "superblock": false, 00:08:53.013 "num_base_bdevs": 3, 00:08:53.013 "num_base_bdevs_discovered": 1, 00:08:53.013 "num_base_bdevs_operational": 3, 00:08:53.013 "base_bdevs_list": [ 00:08:53.013 { 00:08:53.013 "name": "BaseBdev1", 00:08:53.013 "uuid": "c321b9f3-3840-467f-a97c-62b335105897", 00:08:53.013 "is_configured": true, 00:08:53.013 "data_offset": 0, 00:08:53.013 "data_size": 65536 00:08:53.013 }, 00:08:53.013 { 00:08:53.013 "name": "BaseBdev2", 00:08:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.013 "is_configured": false, 00:08:53.013 "data_offset": 0, 00:08:53.013 "data_size": 0 00:08:53.013 }, 00:08:53.013 { 00:08:53.013 "name": "BaseBdev3", 00:08:53.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.013 "is_configured": false, 00:08:53.013 "data_offset": 0, 00:08:53.013 "data_size": 0 00:08:53.013 } 00:08:53.013 ] 00:08:53.013 }' 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.013 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.272 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.272 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.532 [2024-11-19 12:28:58.534887] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.532 [2024-11-19 12:28:58.535024] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.532 [2024-11-19 12:28:58.546899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.532 [2024-11-19 12:28:58.548844] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.532 [2024-11-19 12:28:58.548916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.532 [2024-11-19 12:28:58.548945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.532 [2024-11-19 12:28:58.548968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.532 12:28:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.532 "name": "Existed_Raid", 00:08:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.532 "strip_size_kb": 64, 00:08:53.532 "state": "configuring", 00:08:53.532 "raid_level": "concat", 00:08:53.532 "superblock": false, 00:08:53.532 "num_base_bdevs": 3, 00:08:53.532 "num_base_bdevs_discovered": 1, 00:08:53.532 "num_base_bdevs_operational": 3, 00:08:53.532 "base_bdevs_list": [ 00:08:53.532 { 00:08:53.532 "name": "BaseBdev1", 00:08:53.532 "uuid": "c321b9f3-3840-467f-a97c-62b335105897", 00:08:53.532 "is_configured": true, 00:08:53.532 "data_offset": 
0, 00:08:53.532 "data_size": 65536 00:08:53.532 }, 00:08:53.532 { 00:08:53.532 "name": "BaseBdev2", 00:08:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.532 "is_configured": false, 00:08:53.532 "data_offset": 0, 00:08:53.532 "data_size": 0 00:08:53.532 }, 00:08:53.532 { 00:08:53.532 "name": "BaseBdev3", 00:08:53.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.532 "is_configured": false, 00:08:53.532 "data_offset": 0, 00:08:53.532 "data_size": 0 00:08:53.532 } 00:08:53.532 ] 00:08:53.532 }' 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.532 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.792 12:28:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.792 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.792 12:28:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.792 [2024-11-19 12:28:59.001353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.792 BaseBdev2 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.792 [ 00:08:53.792 { 00:08:53.792 "name": "BaseBdev2", 00:08:53.792 "aliases": [ 00:08:53.792 "455d8019-0a88-4dbb-b39f-f4869e232abb" 00:08:53.792 ], 00:08:53.792 "product_name": "Malloc disk", 00:08:53.792 "block_size": 512, 00:08:53.792 "num_blocks": 65536, 00:08:53.792 "uuid": "455d8019-0a88-4dbb-b39f-f4869e232abb", 00:08:53.792 "assigned_rate_limits": { 00:08:53.792 "rw_ios_per_sec": 0, 00:08:53.792 "rw_mbytes_per_sec": 0, 00:08:53.792 "r_mbytes_per_sec": 0, 00:08:53.792 "w_mbytes_per_sec": 0 00:08:53.792 }, 00:08:53.792 "claimed": true, 00:08:53.792 "claim_type": "exclusive_write", 00:08:53.792 "zoned": false, 00:08:53.792 "supported_io_types": { 00:08:53.792 "read": true, 00:08:53.792 "write": true, 00:08:53.792 "unmap": true, 00:08:53.792 "flush": true, 00:08:53.792 "reset": true, 00:08:53.792 "nvme_admin": false, 00:08:53.792 "nvme_io": false, 00:08:53.792 "nvme_io_md": false, 00:08:53.792 "write_zeroes": true, 00:08:53.792 "zcopy": true, 00:08:53.792 "get_zone_info": false, 00:08:53.792 "zone_management": false, 00:08:53.792 "zone_append": false, 00:08:53.792 "compare": false, 00:08:53.792 "compare_and_write": false, 00:08:53.792 "abort": true, 00:08:53.792 "seek_hole": 
false, 00:08:53.792 "seek_data": false, 00:08:53.792 "copy": true, 00:08:53.792 "nvme_iov_md": false 00:08:53.792 }, 00:08:53.792 "memory_domains": [ 00:08:53.792 { 00:08:53.792 "dma_device_id": "system", 00:08:53.792 "dma_device_type": 1 00:08:53.792 }, 00:08:53.792 { 00:08:53.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.792 "dma_device_type": 2 00:08:53.792 } 00:08:53.792 ], 00:08:53.792 "driver_specific": {} 00:08:53.792 } 00:08:53.792 ] 00:08:53.792 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.793 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.052 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.052 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.052 "name": "Existed_Raid", 00:08:54.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.052 "strip_size_kb": 64, 00:08:54.052 "state": "configuring", 00:08:54.052 "raid_level": "concat", 00:08:54.052 "superblock": false, 00:08:54.052 "num_base_bdevs": 3, 00:08:54.052 "num_base_bdevs_discovered": 2, 00:08:54.052 "num_base_bdevs_operational": 3, 00:08:54.052 "base_bdevs_list": [ 00:08:54.052 { 00:08:54.052 "name": "BaseBdev1", 00:08:54.052 "uuid": "c321b9f3-3840-467f-a97c-62b335105897", 00:08:54.052 "is_configured": true, 00:08:54.052 "data_offset": 0, 00:08:54.052 "data_size": 65536 00:08:54.052 }, 00:08:54.052 { 00:08:54.052 "name": "BaseBdev2", 00:08:54.052 "uuid": "455d8019-0a88-4dbb-b39f-f4869e232abb", 00:08:54.052 "is_configured": true, 00:08:54.052 "data_offset": 0, 00:08:54.052 "data_size": 65536 00:08:54.052 }, 00:08:54.052 { 00:08:54.052 "name": "BaseBdev3", 00:08:54.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.052 "is_configured": false, 00:08:54.052 "data_offset": 0, 00:08:54.052 "data_size": 0 00:08:54.052 } 00:08:54.052 ] 00:08:54.052 }' 00:08:54.052 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.052 12:28:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.312 [2024-11-19 12:28:59.499571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.312 [2024-11-19 12:28:59.499623] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:54.312 [2024-11-19 12:28:59.499635] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:54.312 [2024-11-19 12:28:59.499961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:54.312 [2024-11-19 12:28:59.500103] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:54.312 [2024-11-19 12:28:59.500123] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:54.312 [2024-11-19 12:28:59.500344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.312 BaseBdev3 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.312 12:28:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.312 [ 00:08:54.312 { 00:08:54.312 "name": "BaseBdev3", 00:08:54.312 "aliases": [ 00:08:54.312 "ae94575c-f9ab-4e81-835c-5b2bb6939bad" 00:08:54.312 ], 00:08:54.312 "product_name": "Malloc disk", 00:08:54.312 "block_size": 512, 00:08:54.312 "num_blocks": 65536, 00:08:54.312 "uuid": "ae94575c-f9ab-4e81-835c-5b2bb6939bad", 00:08:54.312 "assigned_rate_limits": { 00:08:54.312 "rw_ios_per_sec": 0, 00:08:54.312 "rw_mbytes_per_sec": 0, 00:08:54.312 "r_mbytes_per_sec": 0, 00:08:54.312 "w_mbytes_per_sec": 0 00:08:54.312 }, 00:08:54.312 "claimed": true, 00:08:54.312 "claim_type": "exclusive_write", 00:08:54.312 "zoned": false, 00:08:54.312 "supported_io_types": { 00:08:54.312 "read": true, 00:08:54.312 "write": true, 00:08:54.312 "unmap": true, 00:08:54.312 "flush": true, 00:08:54.312 "reset": true, 00:08:54.312 "nvme_admin": false, 00:08:54.312 "nvme_io": false, 00:08:54.312 "nvme_io_md": false, 00:08:54.312 "write_zeroes": true, 00:08:54.312 "zcopy": true, 00:08:54.312 "get_zone_info": false, 00:08:54.312 "zone_management": false, 00:08:54.312 "zone_append": false, 00:08:54.312 "compare": false, 
00:08:54.312 "compare_and_write": false, 00:08:54.312 "abort": true, 00:08:54.312 "seek_hole": false, 00:08:54.312 "seek_data": false, 00:08:54.312 "copy": true, 00:08:54.312 "nvme_iov_md": false 00:08:54.312 }, 00:08:54.312 "memory_domains": [ 00:08:54.312 { 00:08:54.312 "dma_device_id": "system", 00:08:54.312 "dma_device_type": 1 00:08:54.312 }, 00:08:54.312 { 00:08:54.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.312 "dma_device_type": 2 00:08:54.312 } 00:08:54.312 ], 00:08:54.312 "driver_specific": {} 00:08:54.312 } 00:08:54.312 ] 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.312 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.571 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.571 "name": "Existed_Raid", 00:08:54.571 "uuid": "af9bfbcb-4f54-4842-af4b-e2219a5792c9", 00:08:54.571 "strip_size_kb": 64, 00:08:54.571 "state": "online", 00:08:54.571 "raid_level": "concat", 00:08:54.571 "superblock": false, 00:08:54.571 "num_base_bdevs": 3, 00:08:54.571 "num_base_bdevs_discovered": 3, 00:08:54.571 "num_base_bdevs_operational": 3, 00:08:54.572 "base_bdevs_list": [ 00:08:54.572 { 00:08:54.572 "name": "BaseBdev1", 00:08:54.572 "uuid": "c321b9f3-3840-467f-a97c-62b335105897", 00:08:54.572 "is_configured": true, 00:08:54.572 "data_offset": 0, 00:08:54.572 "data_size": 65536 00:08:54.572 }, 00:08:54.572 { 00:08:54.572 "name": "BaseBdev2", 00:08:54.572 "uuid": "455d8019-0a88-4dbb-b39f-f4869e232abb", 00:08:54.572 "is_configured": true, 00:08:54.572 "data_offset": 0, 00:08:54.572 "data_size": 65536 00:08:54.572 }, 00:08:54.572 { 00:08:54.572 "name": "BaseBdev3", 00:08:54.572 "uuid": "ae94575c-f9ab-4e81-835c-5b2bb6939bad", 00:08:54.572 "is_configured": true, 00:08:54.572 "data_offset": 0, 00:08:54.572 "data_size": 65536 00:08:54.572 } 00:08:54.572 ] 00:08:54.572 }' 00:08:54.572 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:54.572 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.831 [2024-11-19 12:28:59.899185] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.831 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.831 "name": "Existed_Raid", 00:08:54.831 "aliases": [ 00:08:54.831 "af9bfbcb-4f54-4842-af4b-e2219a5792c9" 00:08:54.831 ], 00:08:54.831 "product_name": "Raid Volume", 00:08:54.831 "block_size": 512, 00:08:54.831 "num_blocks": 196608, 00:08:54.831 "uuid": "af9bfbcb-4f54-4842-af4b-e2219a5792c9", 00:08:54.831 "assigned_rate_limits": { 00:08:54.831 "rw_ios_per_sec": 0, 00:08:54.832 "rw_mbytes_per_sec": 0, 00:08:54.832 "r_mbytes_per_sec": 
0, 00:08:54.832 "w_mbytes_per_sec": 0 00:08:54.832 }, 00:08:54.832 "claimed": false, 00:08:54.832 "zoned": false, 00:08:54.832 "supported_io_types": { 00:08:54.832 "read": true, 00:08:54.832 "write": true, 00:08:54.832 "unmap": true, 00:08:54.832 "flush": true, 00:08:54.832 "reset": true, 00:08:54.832 "nvme_admin": false, 00:08:54.832 "nvme_io": false, 00:08:54.832 "nvme_io_md": false, 00:08:54.832 "write_zeroes": true, 00:08:54.832 "zcopy": false, 00:08:54.832 "get_zone_info": false, 00:08:54.832 "zone_management": false, 00:08:54.832 "zone_append": false, 00:08:54.832 "compare": false, 00:08:54.832 "compare_and_write": false, 00:08:54.832 "abort": false, 00:08:54.832 "seek_hole": false, 00:08:54.832 "seek_data": false, 00:08:54.832 "copy": false, 00:08:54.832 "nvme_iov_md": false 00:08:54.832 }, 00:08:54.832 "memory_domains": [ 00:08:54.832 { 00:08:54.832 "dma_device_id": "system", 00:08:54.832 "dma_device_type": 1 00:08:54.832 }, 00:08:54.832 { 00:08:54.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.832 "dma_device_type": 2 00:08:54.832 }, 00:08:54.832 { 00:08:54.832 "dma_device_id": "system", 00:08:54.832 "dma_device_type": 1 00:08:54.832 }, 00:08:54.832 { 00:08:54.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.832 "dma_device_type": 2 00:08:54.832 }, 00:08:54.832 { 00:08:54.832 "dma_device_id": "system", 00:08:54.832 "dma_device_type": 1 00:08:54.832 }, 00:08:54.832 { 00:08:54.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.832 "dma_device_type": 2 00:08:54.832 } 00:08:54.832 ], 00:08:54.832 "driver_specific": { 00:08:54.832 "raid": { 00:08:54.832 "uuid": "af9bfbcb-4f54-4842-af4b-e2219a5792c9", 00:08:54.832 "strip_size_kb": 64, 00:08:54.832 "state": "online", 00:08:54.832 "raid_level": "concat", 00:08:54.832 "superblock": false, 00:08:54.832 "num_base_bdevs": 3, 00:08:54.832 "num_base_bdevs_discovered": 3, 00:08:54.832 "num_base_bdevs_operational": 3, 00:08:54.832 "base_bdevs_list": [ 00:08:54.832 { 00:08:54.832 "name": "BaseBdev1", 
00:08:54.832 "uuid": "c321b9f3-3840-467f-a97c-62b335105897", 00:08:54.832 "is_configured": true, 00:08:54.832 "data_offset": 0, 00:08:54.832 "data_size": 65536 00:08:54.832 }, 00:08:54.832 { 00:08:54.832 "name": "BaseBdev2", 00:08:54.832 "uuid": "455d8019-0a88-4dbb-b39f-f4869e232abb", 00:08:54.832 "is_configured": true, 00:08:54.832 "data_offset": 0, 00:08:54.832 "data_size": 65536 00:08:54.832 }, 00:08:54.832 { 00:08:54.832 "name": "BaseBdev3", 00:08:54.832 "uuid": "ae94575c-f9ab-4e81-835c-5b2bb6939bad", 00:08:54.832 "is_configured": true, 00:08:54.832 "data_offset": 0, 00:08:54.832 "data_size": 65536 00:08:54.832 } 00:08:54.832 ] 00:08:54.832 } 00:08:54.832 } 00:08:54.832 }' 00:08:54.832 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.832 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:54.832 BaseBdev2 00:08:54.832 BaseBdev3' 00:08:54.832 12:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.832 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.092 [2024-11-19 12:29:00.182512] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.092 [2024-11-19 12:29:00.182589] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.092 [2024-11-19 12:29:00.182679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.092 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.093 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.093 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.093 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.093 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.093 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.093 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.093 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.093 "name": "Existed_Raid", 00:08:55.093 "uuid": "af9bfbcb-4f54-4842-af4b-e2219a5792c9", 00:08:55.093 "strip_size_kb": 64, 00:08:55.093 "state": "offline", 00:08:55.093 "raid_level": "concat", 00:08:55.093 "superblock": false, 00:08:55.093 "num_base_bdevs": 3, 00:08:55.093 "num_base_bdevs_discovered": 2, 00:08:55.093 "num_base_bdevs_operational": 2, 00:08:55.093 "base_bdevs_list": [ 00:08:55.093 { 00:08:55.093 "name": null, 00:08:55.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.093 "is_configured": false, 00:08:55.093 "data_offset": 0, 00:08:55.093 "data_size": 65536 00:08:55.093 }, 00:08:55.093 { 00:08:55.093 "name": "BaseBdev2", 00:08:55.093 "uuid": 
"455d8019-0a88-4dbb-b39f-f4869e232abb", 00:08:55.093 "is_configured": true, 00:08:55.093 "data_offset": 0, 00:08:55.093 "data_size": 65536 00:08:55.093 }, 00:08:55.093 { 00:08:55.093 "name": "BaseBdev3", 00:08:55.093 "uuid": "ae94575c-f9ab-4e81-835c-5b2bb6939bad", 00:08:55.093 "is_configured": true, 00:08:55.093 "data_offset": 0, 00:08:55.093 "data_size": 65536 00:08:55.093 } 00:08:55.093 ] 00:08:55.093 }' 00:08:55.093 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.093 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 [2024-11-19 12:29:00.696827] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 [2024-11-19 12:29:00.759696] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:55.663 [2024-11-19 12:29:00.759865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:55.663 12:29:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 BaseBdev2 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.663 
12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 [ 00:08:55.663 { 00:08:55.663 "name": "BaseBdev2", 00:08:55.663 "aliases": [ 00:08:55.663 "cafb8ac2-6261-4d5d-9dcb-0c497589e79c" 00:08:55.663 ], 00:08:55.663 "product_name": "Malloc disk", 00:08:55.663 "block_size": 512, 00:08:55.663 "num_blocks": 65536, 00:08:55.663 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:55.663 "assigned_rate_limits": { 00:08:55.663 "rw_ios_per_sec": 0, 00:08:55.663 "rw_mbytes_per_sec": 0, 00:08:55.663 "r_mbytes_per_sec": 0, 00:08:55.663 "w_mbytes_per_sec": 0 00:08:55.663 }, 00:08:55.663 "claimed": false, 00:08:55.663 "zoned": false, 00:08:55.663 "supported_io_types": { 00:08:55.663 "read": true, 00:08:55.663 "write": true, 00:08:55.663 "unmap": true, 00:08:55.663 "flush": true, 00:08:55.663 "reset": true, 00:08:55.663 "nvme_admin": false, 00:08:55.663 "nvme_io": false, 00:08:55.663 "nvme_io_md": false, 00:08:55.663 "write_zeroes": true, 
00:08:55.663 "zcopy": true, 00:08:55.663 "get_zone_info": false, 00:08:55.663 "zone_management": false, 00:08:55.663 "zone_append": false, 00:08:55.663 "compare": false, 00:08:55.663 "compare_and_write": false, 00:08:55.663 "abort": true, 00:08:55.663 "seek_hole": false, 00:08:55.663 "seek_data": false, 00:08:55.663 "copy": true, 00:08:55.663 "nvme_iov_md": false 00:08:55.663 }, 00:08:55.663 "memory_domains": [ 00:08:55.663 { 00:08:55.663 "dma_device_id": "system", 00:08:55.663 "dma_device_type": 1 00:08:55.663 }, 00:08:55.663 { 00:08:55.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.663 "dma_device_type": 2 00:08:55.663 } 00:08:55.663 ], 00:08:55.663 "driver_specific": {} 00:08:55.663 } 00:08:55.663 ] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.663 BaseBdev3 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.664 12:29:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.664 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.664 [ 00:08:55.664 { 00:08:55.664 "name": "BaseBdev3", 00:08:55.664 "aliases": [ 00:08:55.664 "02947556-11eb-4ef1-b226-1573f69a011f" 00:08:55.664 ], 00:08:55.664 "product_name": "Malloc disk", 00:08:55.664 "block_size": 512, 00:08:55.664 "num_blocks": 65536, 00:08:55.664 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:55.923 "assigned_rate_limits": { 00:08:55.923 "rw_ios_per_sec": 0, 00:08:55.923 "rw_mbytes_per_sec": 0, 00:08:55.923 "r_mbytes_per_sec": 0, 00:08:55.923 "w_mbytes_per_sec": 0 00:08:55.923 }, 00:08:55.923 "claimed": false, 00:08:55.923 "zoned": false, 00:08:55.923 "supported_io_types": { 00:08:55.923 "read": true, 00:08:55.923 "write": true, 00:08:55.923 "unmap": true, 00:08:55.923 "flush": true, 00:08:55.923 "reset": true, 00:08:55.923 "nvme_admin": false, 00:08:55.923 "nvme_io": false, 00:08:55.923 "nvme_io_md": false, 00:08:55.923 "write_zeroes": true, 
00:08:55.923 "zcopy": true, 00:08:55.923 "get_zone_info": false, 00:08:55.923 "zone_management": false, 00:08:55.923 "zone_append": false, 00:08:55.923 "compare": false, 00:08:55.923 "compare_and_write": false, 00:08:55.923 "abort": true, 00:08:55.923 "seek_hole": false, 00:08:55.923 "seek_data": false, 00:08:55.923 "copy": true, 00:08:55.923 "nvme_iov_md": false 00:08:55.923 }, 00:08:55.923 "memory_domains": [ 00:08:55.923 { 00:08:55.923 "dma_device_id": "system", 00:08:55.923 "dma_device_type": 1 00:08:55.923 }, 00:08:55.923 { 00:08:55.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.923 "dma_device_type": 2 00:08:55.923 } 00:08:55.923 ], 00:08:55.923 "driver_specific": {} 00:08:55.923 } 00:08:55.923 ] 00:08:55.923 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.923 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:55.923 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:55.923 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:55.923 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.923 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.923 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 [2024-11-19 12:29:00.937674] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.924 [2024-11-19 12:29:00.937811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.924 [2024-11-19 12:29:00.937854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.924 [2024-11-19 12:29:00.939804] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.924 "name": "Existed_Raid", 00:08:55.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.924 "strip_size_kb": 64, 00:08:55.924 "state": "configuring", 00:08:55.924 "raid_level": "concat", 00:08:55.924 "superblock": false, 00:08:55.924 "num_base_bdevs": 3, 00:08:55.924 "num_base_bdevs_discovered": 2, 00:08:55.924 "num_base_bdevs_operational": 3, 00:08:55.924 "base_bdevs_list": [ 00:08:55.924 { 00:08:55.924 "name": "BaseBdev1", 00:08:55.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.924 "is_configured": false, 00:08:55.924 "data_offset": 0, 00:08:55.924 "data_size": 0 00:08:55.924 }, 00:08:55.924 { 00:08:55.924 "name": "BaseBdev2", 00:08:55.924 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:55.924 "is_configured": true, 00:08:55.924 "data_offset": 0, 00:08:55.924 "data_size": 65536 00:08:55.924 }, 00:08:55.924 { 00:08:55.924 "name": "BaseBdev3", 00:08:55.924 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:55.924 "is_configured": true, 00:08:55.924 "data_offset": 0, 00:08:55.924 "data_size": 65536 00:08:55.924 } 00:08:55.924 ] 00:08:55.924 }' 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.924 12:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.184 [2024-11-19 12:29:01.416882] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.184 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.445 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.445 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.445 "name": "Existed_Raid", 00:08:56.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.445 "strip_size_kb": 64, 00:08:56.445 "state": "configuring", 00:08:56.445 "raid_level": "concat", 00:08:56.445 "superblock": false, 
00:08:56.445 "num_base_bdevs": 3, 00:08:56.445 "num_base_bdevs_discovered": 1, 00:08:56.445 "num_base_bdevs_operational": 3, 00:08:56.445 "base_bdevs_list": [ 00:08:56.445 { 00:08:56.445 "name": "BaseBdev1", 00:08:56.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.445 "is_configured": false, 00:08:56.445 "data_offset": 0, 00:08:56.445 "data_size": 0 00:08:56.445 }, 00:08:56.445 { 00:08:56.445 "name": null, 00:08:56.445 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:56.445 "is_configured": false, 00:08:56.445 "data_offset": 0, 00:08:56.445 "data_size": 65536 00:08:56.445 }, 00:08:56.445 { 00:08:56.445 "name": "BaseBdev3", 00:08:56.445 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:56.445 "is_configured": true, 00:08:56.445 "data_offset": 0, 00:08:56.445 "data_size": 65536 00:08:56.445 } 00:08:56.445 ] 00:08:56.445 }' 00:08:56.445 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.445 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.703 
12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.703 BaseBdev1 00:08:56.703 [2024-11-19 12:29:01.835095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.703 [ 00:08:56.703 { 00:08:56.703 "name": "BaseBdev1", 00:08:56.703 "aliases": [ 00:08:56.703 "a060bfab-820e-4d88-bfd2-76581f1bc9d7" 00:08:56.703 ], 00:08:56.703 "product_name": 
"Malloc disk", 00:08:56.703 "block_size": 512, 00:08:56.703 "num_blocks": 65536, 00:08:56.703 "uuid": "a060bfab-820e-4d88-bfd2-76581f1bc9d7", 00:08:56.703 "assigned_rate_limits": { 00:08:56.703 "rw_ios_per_sec": 0, 00:08:56.703 "rw_mbytes_per_sec": 0, 00:08:56.703 "r_mbytes_per_sec": 0, 00:08:56.703 "w_mbytes_per_sec": 0 00:08:56.703 }, 00:08:56.703 "claimed": true, 00:08:56.703 "claim_type": "exclusive_write", 00:08:56.703 "zoned": false, 00:08:56.703 "supported_io_types": { 00:08:56.703 "read": true, 00:08:56.703 "write": true, 00:08:56.703 "unmap": true, 00:08:56.703 "flush": true, 00:08:56.703 "reset": true, 00:08:56.703 "nvme_admin": false, 00:08:56.703 "nvme_io": false, 00:08:56.703 "nvme_io_md": false, 00:08:56.703 "write_zeroes": true, 00:08:56.703 "zcopy": true, 00:08:56.703 "get_zone_info": false, 00:08:56.703 "zone_management": false, 00:08:56.703 "zone_append": false, 00:08:56.703 "compare": false, 00:08:56.703 "compare_and_write": false, 00:08:56.703 "abort": true, 00:08:56.703 "seek_hole": false, 00:08:56.703 "seek_data": false, 00:08:56.703 "copy": true, 00:08:56.703 "nvme_iov_md": false 00:08:56.703 }, 00:08:56.703 "memory_domains": [ 00:08:56.703 { 00:08:56.703 "dma_device_id": "system", 00:08:56.703 "dma_device_type": 1 00:08:56.703 }, 00:08:56.703 { 00:08:56.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.703 "dma_device_type": 2 00:08:56.703 } 00:08:56.703 ], 00:08:56.703 "driver_specific": {} 00:08:56.703 } 00:08:56.703 ] 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.703 12:29:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.703 "name": "Existed_Raid", 00:08:56.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.703 "strip_size_kb": 64, 00:08:56.703 "state": "configuring", 00:08:56.703 "raid_level": "concat", 00:08:56.703 "superblock": false, 00:08:56.703 "num_base_bdevs": 3, 00:08:56.703 "num_base_bdevs_discovered": 2, 00:08:56.703 "num_base_bdevs_operational": 3, 00:08:56.703 "base_bdevs_list": [ 00:08:56.703 { 00:08:56.703 "name": "BaseBdev1", 
00:08:56.703 "uuid": "a060bfab-820e-4d88-bfd2-76581f1bc9d7", 00:08:56.703 "is_configured": true, 00:08:56.703 "data_offset": 0, 00:08:56.703 "data_size": 65536 00:08:56.703 }, 00:08:56.703 { 00:08:56.703 "name": null, 00:08:56.703 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:56.703 "is_configured": false, 00:08:56.703 "data_offset": 0, 00:08:56.703 "data_size": 65536 00:08:56.703 }, 00:08:56.703 { 00:08:56.703 "name": "BaseBdev3", 00:08:56.703 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:56.703 "is_configured": true, 00:08:56.703 "data_offset": 0, 00:08:56.703 "data_size": 65536 00:08:56.703 } 00:08:56.703 ] 00:08:56.703 }' 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.703 12:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.272 [2024-11-19 12:29:02.326294] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:57.272 
12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.272 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.273 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.273 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.273 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.273 "name": "Existed_Raid", 00:08:57.273 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:57.273 "strip_size_kb": 64, 00:08:57.273 "state": "configuring", 00:08:57.273 "raid_level": "concat", 00:08:57.273 "superblock": false, 00:08:57.273 "num_base_bdevs": 3, 00:08:57.273 "num_base_bdevs_discovered": 1, 00:08:57.273 "num_base_bdevs_operational": 3, 00:08:57.273 "base_bdevs_list": [ 00:08:57.273 { 00:08:57.273 "name": "BaseBdev1", 00:08:57.273 "uuid": "a060bfab-820e-4d88-bfd2-76581f1bc9d7", 00:08:57.273 "is_configured": true, 00:08:57.273 "data_offset": 0, 00:08:57.273 "data_size": 65536 00:08:57.273 }, 00:08:57.273 { 00:08:57.273 "name": null, 00:08:57.273 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:57.273 "is_configured": false, 00:08:57.273 "data_offset": 0, 00:08:57.273 "data_size": 65536 00:08:57.273 }, 00:08:57.273 { 00:08:57.273 "name": null, 00:08:57.273 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:57.273 "is_configured": false, 00:08:57.273 "data_offset": 0, 00:08:57.273 "data_size": 65536 00:08:57.273 } 00:08:57.273 ] 00:08:57.273 }' 00:08:57.273 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.273 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.533 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.533 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.533 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.533 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.792 [2024-11-19 12:29:02.841586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.792 "name": "Existed_Raid", 00:08:57.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.792 "strip_size_kb": 64, 00:08:57.792 "state": "configuring", 00:08:57.792 "raid_level": "concat", 00:08:57.792 "superblock": false, 00:08:57.792 "num_base_bdevs": 3, 00:08:57.792 "num_base_bdevs_discovered": 2, 00:08:57.792 "num_base_bdevs_operational": 3, 00:08:57.792 "base_bdevs_list": [ 00:08:57.792 { 00:08:57.792 "name": "BaseBdev1", 00:08:57.792 "uuid": "a060bfab-820e-4d88-bfd2-76581f1bc9d7", 00:08:57.792 "is_configured": true, 00:08:57.792 "data_offset": 0, 00:08:57.792 "data_size": 65536 00:08:57.792 }, 00:08:57.792 { 00:08:57.792 "name": null, 00:08:57.792 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:57.792 "is_configured": false, 00:08:57.792 "data_offset": 0, 00:08:57.792 "data_size": 65536 00:08:57.792 }, 00:08:57.792 { 00:08:57.792 "name": "BaseBdev3", 00:08:57.792 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:57.792 "is_configured": true, 00:08:57.792 "data_offset": 0, 00:08:57.792 "data_size": 65536 00:08:57.792 } 00:08:57.792 ] 00:08:57.792 }' 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.792 12:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.051 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.051 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.051 12:29:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.052 [2024-11-19 12:29:03.280827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.052 
12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.052 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.312 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.312 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.312 "name": "Existed_Raid", 00:08:58.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.312 "strip_size_kb": 64, 00:08:58.312 "state": "configuring", 00:08:58.312 "raid_level": "concat", 00:08:58.312 "superblock": false, 00:08:58.312 "num_base_bdevs": 3, 00:08:58.312 "num_base_bdevs_discovered": 1, 00:08:58.312 "num_base_bdevs_operational": 3, 00:08:58.312 "base_bdevs_list": [ 00:08:58.312 { 00:08:58.312 "name": null, 00:08:58.312 "uuid": "a060bfab-820e-4d88-bfd2-76581f1bc9d7", 00:08:58.312 "is_configured": false, 00:08:58.312 "data_offset": 0, 00:08:58.312 "data_size": 65536 00:08:58.312 }, 00:08:58.312 { 00:08:58.312 "name": null, 00:08:58.312 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:58.312 "is_configured": false, 00:08:58.312 "data_offset": 0, 00:08:58.312 "data_size": 65536 00:08:58.312 }, 00:08:58.312 { 00:08:58.312 "name": "BaseBdev3", 00:08:58.312 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:58.312 "is_configured": true, 00:08:58.312 "data_offset": 0, 00:08:58.312 "data_size": 65536 00:08:58.312 } 00:08:58.312 ] 00:08:58.312 }' 00:08:58.312 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.312 12:29:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.572 [2024-11-19 12:29:03.790325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.572 12:29:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.572 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.832 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.832 "name": "Existed_Raid", 00:08:58.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.832 "strip_size_kb": 64, 00:08:58.832 "state": "configuring", 00:08:58.832 "raid_level": "concat", 00:08:58.832 "superblock": false, 00:08:58.832 "num_base_bdevs": 3, 00:08:58.832 "num_base_bdevs_discovered": 2, 00:08:58.832 "num_base_bdevs_operational": 3, 00:08:58.832 "base_bdevs_list": [ 00:08:58.832 { 00:08:58.832 "name": null, 00:08:58.832 "uuid": "a060bfab-820e-4d88-bfd2-76581f1bc9d7", 00:08:58.832 "is_configured": false, 00:08:58.832 "data_offset": 0, 00:08:58.832 "data_size": 65536 00:08:58.832 }, 00:08:58.832 { 00:08:58.832 "name": "BaseBdev2", 00:08:58.832 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:58.832 "is_configured": true, 00:08:58.832 "data_offset": 
0, 00:08:58.832 "data_size": 65536 00:08:58.832 }, 00:08:58.832 { 00:08:58.832 "name": "BaseBdev3", 00:08:58.832 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:58.832 "is_configured": true, 00:08:58.832 "data_offset": 0, 00:08:58.832 "data_size": 65536 00:08:58.832 } 00:08:58.832 ] 00:08:58.832 }' 00:08:58.832 12:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.832 12:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a060bfab-820e-4d88-bfd2-76581f1bc9d7 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.093 [2024-11-19 12:29:04.264403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:59.093 [2024-11-19 12:29:04.264539] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:59.093 [2024-11-19 12:29:04.264554] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:59.093 [2024-11-19 12:29:04.264866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:59.093 [2024-11-19 12:29:04.265003] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:59.093 [2024-11-19 12:29:04.265013] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:59.093 [2024-11-19 12:29:04.265204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.093 NewBaseBdev 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.093 
12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.093 [ 00:08:59.093 { 00:08:59.093 "name": "NewBaseBdev", 00:08:59.093 "aliases": [ 00:08:59.093 "a060bfab-820e-4d88-bfd2-76581f1bc9d7" 00:08:59.093 ], 00:08:59.093 "product_name": "Malloc disk", 00:08:59.093 "block_size": 512, 00:08:59.093 "num_blocks": 65536, 00:08:59.093 "uuid": "a060bfab-820e-4d88-bfd2-76581f1bc9d7", 00:08:59.093 "assigned_rate_limits": { 00:08:59.093 "rw_ios_per_sec": 0, 00:08:59.093 "rw_mbytes_per_sec": 0, 00:08:59.093 "r_mbytes_per_sec": 0, 00:08:59.093 "w_mbytes_per_sec": 0 00:08:59.093 }, 00:08:59.093 "claimed": true, 00:08:59.093 "claim_type": "exclusive_write", 00:08:59.093 "zoned": false, 00:08:59.093 "supported_io_types": { 00:08:59.093 "read": true, 00:08:59.093 "write": true, 00:08:59.093 "unmap": true, 00:08:59.093 "flush": true, 00:08:59.093 "reset": true, 00:08:59.093 "nvme_admin": false, 00:08:59.093 "nvme_io": false, 00:08:59.093 "nvme_io_md": false, 00:08:59.093 "write_zeroes": true, 00:08:59.093 "zcopy": true, 00:08:59.093 "get_zone_info": false, 00:08:59.093 "zone_management": false, 00:08:59.093 "zone_append": false, 00:08:59.093 "compare": false, 00:08:59.093 "compare_and_write": false, 00:08:59.093 "abort": true, 00:08:59.093 "seek_hole": false, 00:08:59.093 "seek_data": false, 00:08:59.093 "copy": true, 00:08:59.093 "nvme_iov_md": false 00:08:59.093 }, 00:08:59.093 
"memory_domains": [ 00:08:59.093 { 00:08:59.093 "dma_device_id": "system", 00:08:59.093 "dma_device_type": 1 00:08:59.093 }, 00:08:59.093 { 00:08:59.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.093 "dma_device_type": 2 00:08:59.093 } 00:08:59.093 ], 00:08:59.093 "driver_specific": {} 00:08:59.093 } 00:08:59.093 ] 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.093 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.353 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.353 "name": "Existed_Raid", 00:08:59.353 "uuid": "447d7f54-6521-4805-a1cb-1790faf0f1ac", 00:08:59.353 "strip_size_kb": 64, 00:08:59.353 "state": "online", 00:08:59.353 "raid_level": "concat", 00:08:59.353 "superblock": false, 00:08:59.353 "num_base_bdevs": 3, 00:08:59.353 "num_base_bdevs_discovered": 3, 00:08:59.353 "num_base_bdevs_operational": 3, 00:08:59.353 "base_bdevs_list": [ 00:08:59.353 { 00:08:59.353 "name": "NewBaseBdev", 00:08:59.353 "uuid": "a060bfab-820e-4d88-bfd2-76581f1bc9d7", 00:08:59.353 "is_configured": true, 00:08:59.353 "data_offset": 0, 00:08:59.353 "data_size": 65536 00:08:59.353 }, 00:08:59.353 { 00:08:59.353 "name": "BaseBdev2", 00:08:59.353 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:59.353 "is_configured": true, 00:08:59.353 "data_offset": 0, 00:08:59.353 "data_size": 65536 00:08:59.353 }, 00:08:59.353 { 00:08:59.353 "name": "BaseBdev3", 00:08:59.353 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:59.353 "is_configured": true, 00:08:59.353 "data_offset": 0, 00:08:59.353 "data_size": 65536 00:08:59.353 } 00:08:59.353 ] 00:08:59.353 }' 00:08:59.353 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.353 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.612 [2024-11-19 12:29:04.807862] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.612 "name": "Existed_Raid", 00:08:59.612 "aliases": [ 00:08:59.612 "447d7f54-6521-4805-a1cb-1790faf0f1ac" 00:08:59.612 ], 00:08:59.612 "product_name": "Raid Volume", 00:08:59.612 "block_size": 512, 00:08:59.612 "num_blocks": 196608, 00:08:59.612 "uuid": "447d7f54-6521-4805-a1cb-1790faf0f1ac", 00:08:59.612 "assigned_rate_limits": { 00:08:59.612 "rw_ios_per_sec": 0, 00:08:59.612 "rw_mbytes_per_sec": 0, 00:08:59.612 "r_mbytes_per_sec": 0, 00:08:59.612 "w_mbytes_per_sec": 0 00:08:59.612 }, 00:08:59.612 "claimed": false, 00:08:59.612 "zoned": false, 00:08:59.612 "supported_io_types": { 00:08:59.612 "read": true, 00:08:59.612 "write": true, 00:08:59.612 "unmap": true, 00:08:59.612 "flush": true, 00:08:59.612 "reset": true, 00:08:59.612 "nvme_admin": false, 00:08:59.612 "nvme_io": false, 00:08:59.612 "nvme_io_md": false, 00:08:59.612 "write_zeroes": true, 
00:08:59.612 "zcopy": false, 00:08:59.612 "get_zone_info": false, 00:08:59.612 "zone_management": false, 00:08:59.612 "zone_append": false, 00:08:59.612 "compare": false, 00:08:59.612 "compare_and_write": false, 00:08:59.612 "abort": false, 00:08:59.612 "seek_hole": false, 00:08:59.612 "seek_data": false, 00:08:59.612 "copy": false, 00:08:59.612 "nvme_iov_md": false 00:08:59.612 }, 00:08:59.612 "memory_domains": [ 00:08:59.612 { 00:08:59.612 "dma_device_id": "system", 00:08:59.612 "dma_device_type": 1 00:08:59.612 }, 00:08:59.612 { 00:08:59.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.612 "dma_device_type": 2 00:08:59.612 }, 00:08:59.612 { 00:08:59.612 "dma_device_id": "system", 00:08:59.612 "dma_device_type": 1 00:08:59.612 }, 00:08:59.612 { 00:08:59.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.612 "dma_device_type": 2 00:08:59.612 }, 00:08:59.612 { 00:08:59.612 "dma_device_id": "system", 00:08:59.612 "dma_device_type": 1 00:08:59.612 }, 00:08:59.612 { 00:08:59.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.612 "dma_device_type": 2 00:08:59.612 } 00:08:59.612 ], 00:08:59.612 "driver_specific": { 00:08:59.612 "raid": { 00:08:59.612 "uuid": "447d7f54-6521-4805-a1cb-1790faf0f1ac", 00:08:59.612 "strip_size_kb": 64, 00:08:59.612 "state": "online", 00:08:59.612 "raid_level": "concat", 00:08:59.612 "superblock": false, 00:08:59.612 "num_base_bdevs": 3, 00:08:59.612 "num_base_bdevs_discovered": 3, 00:08:59.612 "num_base_bdevs_operational": 3, 00:08:59.612 "base_bdevs_list": [ 00:08:59.612 { 00:08:59.612 "name": "NewBaseBdev", 00:08:59.612 "uuid": "a060bfab-820e-4d88-bfd2-76581f1bc9d7", 00:08:59.612 "is_configured": true, 00:08:59.612 "data_offset": 0, 00:08:59.612 "data_size": 65536 00:08:59.612 }, 00:08:59.612 { 00:08:59.612 "name": "BaseBdev2", 00:08:59.612 "uuid": "cafb8ac2-6261-4d5d-9dcb-0c497589e79c", 00:08:59.612 "is_configured": true, 00:08:59.612 "data_offset": 0, 00:08:59.612 "data_size": 65536 00:08:59.612 }, 00:08:59.612 { 
00:08:59.612 "name": "BaseBdev3", 00:08:59.612 "uuid": "02947556-11eb-4ef1-b226-1573f69a011f", 00:08:59.612 "is_configured": true, 00:08:59.612 "data_offset": 0, 00:08:59.612 "data_size": 65536 00:08:59.612 } 00:08:59.612 ] 00:08:59.612 } 00:08:59.612 } 00:08:59.612 }' 00:08:59.612 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.872 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:59.872 BaseBdev2 00:08:59.872 BaseBdev3' 00:08:59.872 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.872 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.872 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.872 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.872 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.873 12:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:59.873 [2024-11-19 12:29:05.087109] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.873 [2024-11-19 12:29:05.087154] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.873 [2024-11-19 12:29:05.087246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.873 [2024-11-19 12:29:05.087304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.873 [2024-11-19 12:29:05.087318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76916 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76916 ']' 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76916 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.873 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76916 00:09:00.132 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.132 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.132 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76916' 00:09:00.132 killing process with pid 76916 00:09:00.132 12:29:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76916 00:09:00.132 [2024-11-19 12:29:05.141217] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.132 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76916 00:09:00.132 [2024-11-19 12:29:05.172221] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.390 12:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:00.390 00:09:00.390 real 0m8.737s 00:09:00.390 user 0m14.868s 00:09:00.390 sys 0m1.802s 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.391 ************************************ 00:09:00.391 END TEST raid_state_function_test 00:09:00.391 ************************************ 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.391 12:29:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:00.391 12:29:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:00.391 12:29:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.391 12:29:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.391 ************************************ 00:09:00.391 START TEST raid_state_function_test_sb 00:09:00.391 ************************************ 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77521 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:00.391 Process raid pid: 77521 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77521' 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77521 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77521 ']' 00:09:00.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.391 12:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.391 [2024-11-19 12:29:05.589111] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:00.391 [2024-11-19 12:29:05.589262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.650 [2024-11-19 12:29:05.754605] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.650 [2024-11-19 12:29:05.800129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.650 [2024-11-19 12:29:05.841700] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.650 [2024-11-19 12:29:05.841738] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.218 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.218 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:01.218 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.218 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.218 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.218 [2024-11-19 12:29:06.406739] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.218 [2024-11-19 12:29:06.406823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.219 [2024-11-19 
12:29:06.406838] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.219 [2024-11-19 12:29:06.406848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.219 [2024-11-19 12:29:06.406855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.219 [2024-11-19 12:29:06.406866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.219 "name": "Existed_Raid", 00:09:01.219 "uuid": "455c0554-de62-401f-98c0-b4187665d1f1", 00:09:01.219 "strip_size_kb": 64, 00:09:01.219 "state": "configuring", 00:09:01.219 "raid_level": "concat", 00:09:01.219 "superblock": true, 00:09:01.219 "num_base_bdevs": 3, 00:09:01.219 "num_base_bdevs_discovered": 0, 00:09:01.219 "num_base_bdevs_operational": 3, 00:09:01.219 "base_bdevs_list": [ 00:09:01.219 { 00:09:01.219 "name": "BaseBdev1", 00:09:01.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.219 "is_configured": false, 00:09:01.219 "data_offset": 0, 00:09:01.219 "data_size": 0 00:09:01.219 }, 00:09:01.219 { 00:09:01.219 "name": "BaseBdev2", 00:09:01.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.219 "is_configured": false, 00:09:01.219 "data_offset": 0, 00:09:01.219 "data_size": 0 00:09:01.219 }, 00:09:01.219 { 00:09:01.219 "name": "BaseBdev3", 00:09:01.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.219 "is_configured": false, 00:09:01.219 "data_offset": 0, 00:09:01.219 "data_size": 0 00:09:01.219 } 00:09:01.219 ] 00:09:01.219 }' 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.219 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.787 [2024-11-19 12:29:06.849847] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.787 [2024-11-19 12:29:06.849972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.787 [2024-11-19 12:29:06.861866] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.787 [2024-11-19 12:29:06.861965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.787 [2024-11-19 12:29:06.861992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.787 [2024-11-19 12:29:06.862013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.787 [2024-11-19 12:29:06.862031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.787 [2024-11-19 12:29:06.862050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:01.787 
12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.787 [2024-11-19 12:29:06.882515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.787 BaseBdev1 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.787 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.787 [ 00:09:01.787 { 
00:09:01.787 "name": "BaseBdev1", 00:09:01.787 "aliases": [ 00:09:01.788 "584207d6-83e5-4d3c-b522-a9b8a0025dfd" 00:09:01.788 ], 00:09:01.788 "product_name": "Malloc disk", 00:09:01.788 "block_size": 512, 00:09:01.788 "num_blocks": 65536, 00:09:01.788 "uuid": "584207d6-83e5-4d3c-b522-a9b8a0025dfd", 00:09:01.788 "assigned_rate_limits": { 00:09:01.788 "rw_ios_per_sec": 0, 00:09:01.788 "rw_mbytes_per_sec": 0, 00:09:01.788 "r_mbytes_per_sec": 0, 00:09:01.788 "w_mbytes_per_sec": 0 00:09:01.788 }, 00:09:01.788 "claimed": true, 00:09:01.788 "claim_type": "exclusive_write", 00:09:01.788 "zoned": false, 00:09:01.788 "supported_io_types": { 00:09:01.788 "read": true, 00:09:01.788 "write": true, 00:09:01.788 "unmap": true, 00:09:01.788 "flush": true, 00:09:01.788 "reset": true, 00:09:01.788 "nvme_admin": false, 00:09:01.788 "nvme_io": false, 00:09:01.788 "nvme_io_md": false, 00:09:01.788 "write_zeroes": true, 00:09:01.788 "zcopy": true, 00:09:01.788 "get_zone_info": false, 00:09:01.788 "zone_management": false, 00:09:01.788 "zone_append": false, 00:09:01.788 "compare": false, 00:09:01.788 "compare_and_write": false, 00:09:01.788 "abort": true, 00:09:01.788 "seek_hole": false, 00:09:01.788 "seek_data": false, 00:09:01.788 "copy": true, 00:09:01.788 "nvme_iov_md": false 00:09:01.788 }, 00:09:01.788 "memory_domains": [ 00:09:01.788 { 00:09:01.788 "dma_device_id": "system", 00:09:01.788 "dma_device_type": 1 00:09:01.788 }, 00:09:01.788 { 00:09:01.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.788 "dma_device_type": 2 00:09:01.788 } 00:09:01.788 ], 00:09:01.788 "driver_specific": {} 00:09:01.788 } 00:09:01.788 ] 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.788 "name": "Existed_Raid", 00:09:01.788 "uuid": "5982a5eb-e318-4ba4-9d96-62cc0894dcd0", 00:09:01.788 "strip_size_kb": 64, 00:09:01.788 "state": "configuring", 00:09:01.788 "raid_level": "concat", 00:09:01.788 "superblock": true, 00:09:01.788 
"num_base_bdevs": 3, 00:09:01.788 "num_base_bdevs_discovered": 1, 00:09:01.788 "num_base_bdevs_operational": 3, 00:09:01.788 "base_bdevs_list": [ 00:09:01.788 { 00:09:01.788 "name": "BaseBdev1", 00:09:01.788 "uuid": "584207d6-83e5-4d3c-b522-a9b8a0025dfd", 00:09:01.788 "is_configured": true, 00:09:01.788 "data_offset": 2048, 00:09:01.788 "data_size": 63488 00:09:01.788 }, 00:09:01.788 { 00:09:01.788 "name": "BaseBdev2", 00:09:01.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.788 "is_configured": false, 00:09:01.788 "data_offset": 0, 00:09:01.788 "data_size": 0 00:09:01.788 }, 00:09:01.788 { 00:09:01.788 "name": "BaseBdev3", 00:09:01.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.788 "is_configured": false, 00:09:01.788 "data_offset": 0, 00:09:01.788 "data_size": 0 00:09:01.788 } 00:09:01.788 ] 00:09:01.788 }' 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.788 12:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.356 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.357 [2024-11-19 12:29:07.357768] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.357 [2024-11-19 12:29:07.357894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.357 
12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.357 [2024-11-19 12:29:07.369782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.357 [2024-11-19 12:29:07.371579] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.357 [2024-11-19 12:29:07.371626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.357 [2024-11-19 12:29:07.371636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:02.357 [2024-11-19 12:29:07.371646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.357 "name": "Existed_Raid", 00:09:02.357 "uuid": "8ed2fd05-afea-4472-972b-11b4ec17303a", 00:09:02.357 "strip_size_kb": 64, 00:09:02.357 "state": "configuring", 00:09:02.357 "raid_level": "concat", 00:09:02.357 "superblock": true, 00:09:02.357 "num_base_bdevs": 3, 00:09:02.357 "num_base_bdevs_discovered": 1, 00:09:02.357 "num_base_bdevs_operational": 3, 00:09:02.357 "base_bdevs_list": [ 00:09:02.357 { 00:09:02.357 "name": "BaseBdev1", 00:09:02.357 "uuid": "584207d6-83e5-4d3c-b522-a9b8a0025dfd", 00:09:02.357 "is_configured": true, 00:09:02.357 "data_offset": 2048, 00:09:02.357 "data_size": 63488 00:09:02.357 }, 00:09:02.357 { 00:09:02.357 "name": "BaseBdev2", 00:09:02.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.357 "is_configured": false, 00:09:02.357 "data_offset": 0, 00:09:02.357 "data_size": 0 00:09:02.357 }, 00:09:02.357 { 00:09:02.357 "name": "BaseBdev3", 00:09:02.357 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:02.357 "is_configured": false, 00:09:02.357 "data_offset": 0, 00:09:02.357 "data_size": 0 00:09:02.357 } 00:09:02.357 ] 00:09:02.357 }' 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.357 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.617 [2024-11-19 12:29:07.853773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.617 BaseBdev2 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.617 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.876 [ 00:09:02.876 { 00:09:02.876 "name": "BaseBdev2", 00:09:02.876 "aliases": [ 00:09:02.876 "b2079c46-7d6b-4041-9780-bd6d99bdf631" 00:09:02.876 ], 00:09:02.877 "product_name": "Malloc disk", 00:09:02.877 "block_size": 512, 00:09:02.877 "num_blocks": 65536, 00:09:02.877 "uuid": "b2079c46-7d6b-4041-9780-bd6d99bdf631", 00:09:02.877 "assigned_rate_limits": { 00:09:02.877 "rw_ios_per_sec": 0, 00:09:02.877 "rw_mbytes_per_sec": 0, 00:09:02.877 "r_mbytes_per_sec": 0, 00:09:02.877 "w_mbytes_per_sec": 0 00:09:02.877 }, 00:09:02.877 "claimed": true, 00:09:02.877 "claim_type": "exclusive_write", 00:09:02.877 "zoned": false, 00:09:02.877 "supported_io_types": { 00:09:02.877 "read": true, 00:09:02.877 "write": true, 00:09:02.877 "unmap": true, 00:09:02.877 "flush": true, 00:09:02.877 "reset": true, 00:09:02.877 "nvme_admin": false, 00:09:02.877 "nvme_io": false, 00:09:02.877 "nvme_io_md": false, 00:09:02.877 "write_zeroes": true, 00:09:02.877 "zcopy": true, 00:09:02.877 "get_zone_info": false, 00:09:02.877 "zone_management": false, 00:09:02.877 "zone_append": false, 00:09:02.877 "compare": false, 00:09:02.877 "compare_and_write": false, 00:09:02.877 "abort": true, 00:09:02.877 "seek_hole": false, 00:09:02.877 "seek_data": false, 00:09:02.877 "copy": true, 00:09:02.877 "nvme_iov_md": false 00:09:02.877 }, 00:09:02.877 "memory_domains": [ 00:09:02.877 { 00:09:02.877 "dma_device_id": "system", 00:09:02.877 "dma_device_type": 1 00:09:02.877 }, 00:09:02.877 { 00:09:02.877 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.877 "dma_device_type": 2 00:09:02.877 } 00:09:02.877 ], 00:09:02.877 "driver_specific": {} 00:09:02.877 } 00:09:02.877 ] 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.877 "name": "Existed_Raid", 00:09:02.877 "uuid": "8ed2fd05-afea-4472-972b-11b4ec17303a", 00:09:02.877 "strip_size_kb": 64, 00:09:02.877 "state": "configuring", 00:09:02.877 "raid_level": "concat", 00:09:02.877 "superblock": true, 00:09:02.877 "num_base_bdevs": 3, 00:09:02.877 "num_base_bdevs_discovered": 2, 00:09:02.877 "num_base_bdevs_operational": 3, 00:09:02.877 "base_bdevs_list": [ 00:09:02.877 { 00:09:02.877 "name": "BaseBdev1", 00:09:02.877 "uuid": "584207d6-83e5-4d3c-b522-a9b8a0025dfd", 00:09:02.877 "is_configured": true, 00:09:02.877 "data_offset": 2048, 00:09:02.877 "data_size": 63488 00:09:02.877 }, 00:09:02.877 { 00:09:02.877 "name": "BaseBdev2", 00:09:02.877 "uuid": "b2079c46-7d6b-4041-9780-bd6d99bdf631", 00:09:02.877 "is_configured": true, 00:09:02.877 "data_offset": 2048, 00:09:02.877 "data_size": 63488 00:09:02.877 }, 00:09:02.877 { 00:09:02.877 "name": "BaseBdev3", 00:09:02.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.877 "is_configured": false, 00:09:02.877 "data_offset": 0, 00:09:02.877 "data_size": 0 00:09:02.877 } 00:09:02.877 ] 00:09:02.877 }' 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.877 12:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:03.137 12:29:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.137 [2024-11-19 12:29:08.327989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.137 BaseBdev3 00:09:03.137 [2024-11-19 12:29:08.328287] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:03.137 [2024-11-19 12:29:08.328320] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.137 [2024-11-19 12:29:08.328609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:03.137 [2024-11-19 12:29:08.328735] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:03.137 [2024-11-19 12:29:08.328770] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.137 [2024-11-19 12:29:08.328892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.137 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.137 [ 00:09:03.137 { 00:09:03.137 "name": "BaseBdev3", 00:09:03.137 "aliases": [ 00:09:03.137 "eafd123a-f4e4-42db-9975-905fac71dc79" 00:09:03.137 ], 00:09:03.137 "product_name": "Malloc disk", 00:09:03.137 "block_size": 512, 00:09:03.137 "num_blocks": 65536, 00:09:03.137 "uuid": "eafd123a-f4e4-42db-9975-905fac71dc79", 00:09:03.137 "assigned_rate_limits": { 00:09:03.137 "rw_ios_per_sec": 0, 00:09:03.137 "rw_mbytes_per_sec": 0, 00:09:03.137 "r_mbytes_per_sec": 0, 00:09:03.137 "w_mbytes_per_sec": 0 00:09:03.137 }, 00:09:03.137 "claimed": true, 00:09:03.137 "claim_type": "exclusive_write", 00:09:03.137 "zoned": false, 00:09:03.137 "supported_io_types": { 00:09:03.137 "read": true, 00:09:03.137 "write": true, 00:09:03.137 "unmap": true, 00:09:03.137 "flush": true, 00:09:03.137 "reset": true, 00:09:03.137 "nvme_admin": false, 00:09:03.137 "nvme_io": false, 00:09:03.137 "nvme_io_md": false, 00:09:03.137 "write_zeroes": true, 00:09:03.137 "zcopy": true, 00:09:03.137 "get_zone_info": false, 00:09:03.137 "zone_management": false, 00:09:03.137 "zone_append": false, 00:09:03.137 "compare": false, 00:09:03.137 "compare_and_write": false, 00:09:03.137 "abort": true, 00:09:03.137 "seek_hole": false, 00:09:03.137 "seek_data": false, 
00:09:03.137 "copy": true, 00:09:03.137 "nvme_iov_md": false 00:09:03.137 }, 00:09:03.137 "memory_domains": [ 00:09:03.137 { 00:09:03.137 "dma_device_id": "system", 00:09:03.137 "dma_device_type": 1 00:09:03.137 }, 00:09:03.138 { 00:09:03.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.138 "dma_device_type": 2 00:09:03.138 } 00:09:03.138 ], 00:09:03.138 "driver_specific": {} 00:09:03.138 } 00:09:03.138 ] 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.138 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.397 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.397 "name": "Existed_Raid", 00:09:03.397 "uuid": "8ed2fd05-afea-4472-972b-11b4ec17303a", 00:09:03.398 "strip_size_kb": 64, 00:09:03.398 "state": "online", 00:09:03.398 "raid_level": "concat", 00:09:03.398 "superblock": true, 00:09:03.398 "num_base_bdevs": 3, 00:09:03.398 "num_base_bdevs_discovered": 3, 00:09:03.398 "num_base_bdevs_operational": 3, 00:09:03.398 "base_bdevs_list": [ 00:09:03.398 { 00:09:03.398 "name": "BaseBdev1", 00:09:03.398 "uuid": "584207d6-83e5-4d3c-b522-a9b8a0025dfd", 00:09:03.398 "is_configured": true, 00:09:03.398 "data_offset": 2048, 00:09:03.398 "data_size": 63488 00:09:03.398 }, 00:09:03.398 { 00:09:03.398 "name": "BaseBdev2", 00:09:03.398 "uuid": "b2079c46-7d6b-4041-9780-bd6d99bdf631", 00:09:03.398 "is_configured": true, 00:09:03.398 "data_offset": 2048, 00:09:03.398 "data_size": 63488 00:09:03.398 }, 00:09:03.398 { 00:09:03.398 "name": "BaseBdev3", 00:09:03.398 "uuid": "eafd123a-f4e4-42db-9975-905fac71dc79", 00:09:03.398 "is_configured": true, 00:09:03.398 "data_offset": 2048, 00:09:03.398 "data_size": 63488 00:09:03.398 } 00:09:03.398 ] 00:09:03.398 }' 00:09:03.398 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.398 12:29:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.657 [2024-11-19 12:29:08.823520] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.657 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.657 "name": "Existed_Raid", 00:09:03.657 "aliases": [ 00:09:03.657 "8ed2fd05-afea-4472-972b-11b4ec17303a" 00:09:03.657 ], 00:09:03.657 "product_name": "Raid Volume", 00:09:03.657 "block_size": 512, 00:09:03.657 "num_blocks": 190464, 00:09:03.657 "uuid": "8ed2fd05-afea-4472-972b-11b4ec17303a", 00:09:03.657 "assigned_rate_limits": { 00:09:03.657 "rw_ios_per_sec": 0, 00:09:03.657 "rw_mbytes_per_sec": 0, 00:09:03.657 
"r_mbytes_per_sec": 0, 00:09:03.657 "w_mbytes_per_sec": 0 00:09:03.657 }, 00:09:03.657 "claimed": false, 00:09:03.657 "zoned": false, 00:09:03.657 "supported_io_types": { 00:09:03.657 "read": true, 00:09:03.657 "write": true, 00:09:03.657 "unmap": true, 00:09:03.657 "flush": true, 00:09:03.657 "reset": true, 00:09:03.657 "nvme_admin": false, 00:09:03.657 "nvme_io": false, 00:09:03.657 "nvme_io_md": false, 00:09:03.657 "write_zeroes": true, 00:09:03.657 "zcopy": false, 00:09:03.657 "get_zone_info": false, 00:09:03.657 "zone_management": false, 00:09:03.657 "zone_append": false, 00:09:03.657 "compare": false, 00:09:03.657 "compare_and_write": false, 00:09:03.657 "abort": false, 00:09:03.657 "seek_hole": false, 00:09:03.657 "seek_data": false, 00:09:03.657 "copy": false, 00:09:03.657 "nvme_iov_md": false 00:09:03.657 }, 00:09:03.657 "memory_domains": [ 00:09:03.657 { 00:09:03.657 "dma_device_id": "system", 00:09:03.657 "dma_device_type": 1 00:09:03.657 }, 00:09:03.657 { 00:09:03.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.657 "dma_device_type": 2 00:09:03.657 }, 00:09:03.658 { 00:09:03.658 "dma_device_id": "system", 00:09:03.658 "dma_device_type": 1 00:09:03.658 }, 00:09:03.658 { 00:09:03.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.658 "dma_device_type": 2 00:09:03.658 }, 00:09:03.658 { 00:09:03.658 "dma_device_id": "system", 00:09:03.658 "dma_device_type": 1 00:09:03.658 }, 00:09:03.658 { 00:09:03.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.658 "dma_device_type": 2 00:09:03.658 } 00:09:03.658 ], 00:09:03.658 "driver_specific": { 00:09:03.658 "raid": { 00:09:03.658 "uuid": "8ed2fd05-afea-4472-972b-11b4ec17303a", 00:09:03.658 "strip_size_kb": 64, 00:09:03.658 "state": "online", 00:09:03.658 "raid_level": "concat", 00:09:03.658 "superblock": true, 00:09:03.658 "num_base_bdevs": 3, 00:09:03.658 "num_base_bdevs_discovered": 3, 00:09:03.658 "num_base_bdevs_operational": 3, 00:09:03.658 "base_bdevs_list": [ 00:09:03.658 { 00:09:03.658 
"name": "BaseBdev1", 00:09:03.658 "uuid": "584207d6-83e5-4d3c-b522-a9b8a0025dfd", 00:09:03.658 "is_configured": true, 00:09:03.658 "data_offset": 2048, 00:09:03.658 "data_size": 63488 00:09:03.658 }, 00:09:03.658 { 00:09:03.658 "name": "BaseBdev2", 00:09:03.658 "uuid": "b2079c46-7d6b-4041-9780-bd6d99bdf631", 00:09:03.658 "is_configured": true, 00:09:03.658 "data_offset": 2048, 00:09:03.658 "data_size": 63488 00:09:03.658 }, 00:09:03.658 { 00:09:03.658 "name": "BaseBdev3", 00:09:03.658 "uuid": "eafd123a-f4e4-42db-9975-905fac71dc79", 00:09:03.658 "is_configured": true, 00:09:03.658 "data_offset": 2048, 00:09:03.658 "data_size": 63488 00:09:03.658 } 00:09:03.658 ] 00:09:03.658 } 00:09:03.658 } 00:09:03.658 }' 00:09:03.658 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.658 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:03.658 BaseBdev2 00:09:03.658 BaseBdev3' 00:09:03.658 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.918 12:29:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.918 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.919 12:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.919 12:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.919 [2024-11-19 12:29:09.102819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.919 [2024-11-19 12:29:09.102854] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.919 [2024-11-19 12:29:09.102930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.919 "name": "Existed_Raid", 00:09:03.919 "uuid": "8ed2fd05-afea-4472-972b-11b4ec17303a", 00:09:03.919 "strip_size_kb": 64, 00:09:03.919 "state": "offline", 00:09:03.919 "raid_level": "concat", 00:09:03.919 "superblock": true, 00:09:03.919 "num_base_bdevs": 3, 00:09:03.919 "num_base_bdevs_discovered": 2, 00:09:03.919 "num_base_bdevs_operational": 2, 00:09:03.919 "base_bdevs_list": [ 00:09:03.919 { 00:09:03.919 "name": null, 00:09:03.919 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:03.919 "is_configured": false, 00:09:03.919 "data_offset": 0, 00:09:03.919 "data_size": 63488 00:09:03.919 }, 00:09:03.919 { 00:09:03.919 "name": "BaseBdev2", 00:09:03.919 "uuid": "b2079c46-7d6b-4041-9780-bd6d99bdf631", 00:09:03.919 "is_configured": true, 00:09:03.919 "data_offset": 2048, 00:09:03.919 "data_size": 63488 00:09:03.919 }, 00:09:03.919 { 00:09:03.919 "name": "BaseBdev3", 00:09:03.919 "uuid": "eafd123a-f4e4-42db-9975-905fac71dc79", 00:09:03.919 "is_configured": true, 00:09:03.919 "data_offset": 2048, 00:09:03.919 "data_size": 63488 00:09:03.919 } 00:09:03.919 ] 00:09:03.919 }' 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.919 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.489 [2024-11-19 12:29:09.637228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.489 [2024-11-19 12:29:09.704439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.489 [2024-11-19 12:29:09.704581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.489 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.749 BaseBdev2 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.749 
12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.749 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.749 [ 00:09:04.749 { 00:09:04.749 "name": "BaseBdev2", 00:09:04.749 "aliases": [ 00:09:04.749 "6fa82432-8054-49e0-90e7-f453c5161c96" 00:09:04.749 ], 00:09:04.749 "product_name": "Malloc disk", 00:09:04.749 "block_size": 512, 00:09:04.749 "num_blocks": 65536, 00:09:04.749 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:04.749 "assigned_rate_limits": { 00:09:04.749 "rw_ios_per_sec": 0, 00:09:04.749 "rw_mbytes_per_sec": 0, 00:09:04.749 "r_mbytes_per_sec": 0, 00:09:04.749 "w_mbytes_per_sec": 0 
00:09:04.749 }, 00:09:04.749 "claimed": false, 00:09:04.749 "zoned": false, 00:09:04.749 "supported_io_types": { 00:09:04.749 "read": true, 00:09:04.749 "write": true, 00:09:04.749 "unmap": true, 00:09:04.749 "flush": true, 00:09:04.749 "reset": true, 00:09:04.749 "nvme_admin": false, 00:09:04.749 "nvme_io": false, 00:09:04.749 "nvme_io_md": false, 00:09:04.749 "write_zeroes": true, 00:09:04.749 "zcopy": true, 00:09:04.749 "get_zone_info": false, 00:09:04.749 "zone_management": false, 00:09:04.749 "zone_append": false, 00:09:04.749 "compare": false, 00:09:04.749 "compare_and_write": false, 00:09:04.749 "abort": true, 00:09:04.749 "seek_hole": false, 00:09:04.749 "seek_data": false, 00:09:04.749 "copy": true, 00:09:04.749 "nvme_iov_md": false 00:09:04.749 }, 00:09:04.749 "memory_domains": [ 00:09:04.749 { 00:09:04.749 "dma_device_id": "system", 00:09:04.750 "dma_device_type": 1 00:09:04.750 }, 00:09:04.750 { 00:09:04.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.750 "dma_device_type": 2 00:09:04.750 } 00:09:04.750 ], 00:09:04.750 "driver_specific": {} 00:09:04.750 } 00:09:04.750 ] 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.750 BaseBdev3 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.750 [ 00:09:04.750 { 00:09:04.750 "name": "BaseBdev3", 00:09:04.750 "aliases": [ 00:09:04.750 "d147f734-ff83-42d5-8d4b-f6050ac1584c" 00:09:04.750 ], 00:09:04.750 "product_name": "Malloc disk", 00:09:04.750 "block_size": 512, 00:09:04.750 "num_blocks": 65536, 00:09:04.750 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:04.750 "assigned_rate_limits": { 00:09:04.750 "rw_ios_per_sec": 0, 00:09:04.750 "rw_mbytes_per_sec": 0, 
00:09:04.750 "r_mbytes_per_sec": 0, 00:09:04.750 "w_mbytes_per_sec": 0 00:09:04.750 }, 00:09:04.750 "claimed": false, 00:09:04.750 "zoned": false, 00:09:04.750 "supported_io_types": { 00:09:04.750 "read": true, 00:09:04.750 "write": true, 00:09:04.750 "unmap": true, 00:09:04.750 "flush": true, 00:09:04.750 "reset": true, 00:09:04.750 "nvme_admin": false, 00:09:04.750 "nvme_io": false, 00:09:04.750 "nvme_io_md": false, 00:09:04.750 "write_zeroes": true, 00:09:04.750 "zcopy": true, 00:09:04.750 "get_zone_info": false, 00:09:04.750 "zone_management": false, 00:09:04.750 "zone_append": false, 00:09:04.750 "compare": false, 00:09:04.750 "compare_and_write": false, 00:09:04.750 "abort": true, 00:09:04.750 "seek_hole": false, 00:09:04.750 "seek_data": false, 00:09:04.750 "copy": true, 00:09:04.750 "nvme_iov_md": false 00:09:04.750 }, 00:09:04.750 "memory_domains": [ 00:09:04.750 { 00:09:04.750 "dma_device_id": "system", 00:09:04.750 "dma_device_type": 1 00:09:04.750 }, 00:09:04.750 { 00:09:04.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.750 "dma_device_type": 2 00:09:04.750 } 00:09:04.750 ], 00:09:04.750 "driver_specific": {} 00:09:04.750 } 00:09:04.750 ] 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.750 [2024-11-19 12:29:09.880577] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.750 [2024-11-19 12:29:09.880708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.750 [2024-11-19 12:29:09.880760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.750 [2024-11-19 12:29:09.882570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.750 12:29:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.750 "name": "Existed_Raid", 00:09:04.750 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:04.750 "strip_size_kb": 64, 00:09:04.750 "state": "configuring", 00:09:04.750 "raid_level": "concat", 00:09:04.750 "superblock": true, 00:09:04.750 "num_base_bdevs": 3, 00:09:04.750 "num_base_bdevs_discovered": 2, 00:09:04.750 "num_base_bdevs_operational": 3, 00:09:04.750 "base_bdevs_list": [ 00:09:04.750 { 00:09:04.750 "name": "BaseBdev1", 00:09:04.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.750 "is_configured": false, 00:09:04.750 "data_offset": 0, 00:09:04.750 "data_size": 0 00:09:04.750 }, 00:09:04.750 { 00:09:04.750 "name": "BaseBdev2", 00:09:04.750 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:04.750 "is_configured": true, 00:09:04.750 "data_offset": 2048, 00:09:04.750 "data_size": 63488 00:09:04.750 }, 00:09:04.750 { 00:09:04.750 "name": "BaseBdev3", 00:09:04.750 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:04.750 "is_configured": true, 00:09:04.750 "data_offset": 2048, 00:09:04.750 "data_size": 63488 00:09:04.750 } 00:09:04.750 ] 00:09:04.750 }' 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.750 12:29:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.319 [2024-11-19 12:29:10.359828] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.319 12:29:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.319 "name": "Existed_Raid", 00:09:05.319 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:05.319 "strip_size_kb": 64, 00:09:05.319 "state": "configuring", 00:09:05.319 "raid_level": "concat", 00:09:05.319 "superblock": true, 00:09:05.319 "num_base_bdevs": 3, 00:09:05.319 "num_base_bdevs_discovered": 1, 00:09:05.319 "num_base_bdevs_operational": 3, 00:09:05.319 "base_bdevs_list": [ 00:09:05.319 { 00:09:05.319 "name": "BaseBdev1", 00:09:05.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.319 "is_configured": false, 00:09:05.319 "data_offset": 0, 00:09:05.319 "data_size": 0 00:09:05.319 }, 00:09:05.319 { 00:09:05.319 "name": null, 00:09:05.319 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:05.319 "is_configured": false, 00:09:05.319 "data_offset": 0, 00:09:05.319 "data_size": 63488 00:09:05.319 }, 00:09:05.319 { 00:09:05.319 "name": "BaseBdev3", 00:09:05.319 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:05.319 "is_configured": true, 00:09:05.319 "data_offset": 2048, 00:09:05.319 "data_size": 63488 00:09:05.319 } 00:09:05.319 ] 00:09:05.319 }' 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.319 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.578 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.578 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.578 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:05.578 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.578 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.836 [2024-11-19 12:29:10.873800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.836 BaseBdev1 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.836 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.836 [ 00:09:05.836 { 00:09:05.836 "name": "BaseBdev1", 00:09:05.836 "aliases": [ 00:09:05.836 "7b2b18d9-c76c-4338-858d-a84faeafef26" 00:09:05.836 ], 00:09:05.836 "product_name": "Malloc disk", 00:09:05.836 "block_size": 512, 00:09:05.836 "num_blocks": 65536, 00:09:05.836 "uuid": "7b2b18d9-c76c-4338-858d-a84faeafef26", 00:09:05.836 "assigned_rate_limits": { 00:09:05.836 "rw_ios_per_sec": 0, 00:09:05.836 "rw_mbytes_per_sec": 0, 00:09:05.836 "r_mbytes_per_sec": 0, 00:09:05.836 "w_mbytes_per_sec": 0 00:09:05.836 }, 00:09:05.836 "claimed": true, 00:09:05.836 "claim_type": "exclusive_write", 00:09:05.836 "zoned": false, 00:09:05.836 "supported_io_types": { 00:09:05.836 "read": true, 00:09:05.836 "write": true, 00:09:05.836 "unmap": true, 00:09:05.836 "flush": true, 00:09:05.836 "reset": true, 00:09:05.836 "nvme_admin": false, 00:09:05.836 "nvme_io": false, 00:09:05.836 "nvme_io_md": false, 00:09:05.836 "write_zeroes": true, 00:09:05.836 "zcopy": true, 00:09:05.836 "get_zone_info": false, 00:09:05.836 "zone_management": false, 00:09:05.836 "zone_append": false, 00:09:05.836 "compare": false, 00:09:05.836 "compare_and_write": false, 00:09:05.836 "abort": true, 00:09:05.836 "seek_hole": false, 00:09:05.836 "seek_data": false, 00:09:05.836 "copy": true, 00:09:05.836 "nvme_iov_md": false 00:09:05.836 }, 00:09:05.836 "memory_domains": [ 00:09:05.836 { 00:09:05.837 "dma_device_id": "system", 00:09:05.837 "dma_device_type": 1 00:09:05.837 }, 00:09:05.837 { 00:09:05.837 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:05.837 "dma_device_type": 2 00:09:05.837 } 00:09:05.837 ], 00:09:05.837 "driver_specific": {} 00:09:05.837 } 00:09:05.837 ] 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.837 "name": "Existed_Raid", 00:09:05.837 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:05.837 "strip_size_kb": 64, 00:09:05.837 "state": "configuring", 00:09:05.837 "raid_level": "concat", 00:09:05.837 "superblock": true, 00:09:05.837 "num_base_bdevs": 3, 00:09:05.837 "num_base_bdevs_discovered": 2, 00:09:05.837 "num_base_bdevs_operational": 3, 00:09:05.837 "base_bdevs_list": [ 00:09:05.837 { 00:09:05.837 "name": "BaseBdev1", 00:09:05.837 "uuid": "7b2b18d9-c76c-4338-858d-a84faeafef26", 00:09:05.837 "is_configured": true, 00:09:05.837 "data_offset": 2048, 00:09:05.837 "data_size": 63488 00:09:05.837 }, 00:09:05.837 { 00:09:05.837 "name": null, 00:09:05.837 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:05.837 "is_configured": false, 00:09:05.837 "data_offset": 0, 00:09:05.837 "data_size": 63488 00:09:05.837 }, 00:09:05.837 { 00:09:05.837 "name": "BaseBdev3", 00:09:05.837 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:05.837 "is_configured": true, 00:09:05.837 "data_offset": 2048, 00:09:05.837 "data_size": 63488 00:09:05.837 } 00:09:05.837 ] 00:09:05.837 }' 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.837 12:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- 
# jq '.[0].base_bdevs_list[0].is_configured' 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.405 [2024-11-19 12:29:11.416885] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.405 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.406 "name": "Existed_Raid", 00:09:06.406 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:06.406 "strip_size_kb": 64, 00:09:06.406 "state": "configuring", 00:09:06.406 "raid_level": "concat", 00:09:06.406 "superblock": true, 00:09:06.406 "num_base_bdevs": 3, 00:09:06.406 "num_base_bdevs_discovered": 1, 00:09:06.406 "num_base_bdevs_operational": 3, 00:09:06.406 "base_bdevs_list": [ 00:09:06.406 { 00:09:06.406 "name": "BaseBdev1", 00:09:06.406 "uuid": "7b2b18d9-c76c-4338-858d-a84faeafef26", 00:09:06.406 "is_configured": true, 00:09:06.406 "data_offset": 2048, 00:09:06.406 "data_size": 63488 00:09:06.406 }, 00:09:06.406 { 00:09:06.406 "name": null, 00:09:06.406 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:06.406 "is_configured": false, 00:09:06.406 "data_offset": 0, 00:09:06.406 "data_size": 63488 00:09:06.406 }, 00:09:06.406 { 00:09:06.406 "name": null, 00:09:06.406 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:06.406 "is_configured": false, 00:09:06.406 "data_offset": 0, 00:09:06.406 "data_size": 63488 00:09:06.406 } 00:09:06.406 ] 00:09:06.406 }' 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.406 12:29:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.664 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.664 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.664 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.664 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.664 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.924 [2024-11-19 12:29:11.932079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.924 12:29:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.924 "name": "Existed_Raid", 00:09:06.924 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:06.924 "strip_size_kb": 64, 00:09:06.924 "state": "configuring", 00:09:06.924 "raid_level": "concat", 00:09:06.924 "superblock": true, 00:09:06.924 "num_base_bdevs": 3, 00:09:06.924 "num_base_bdevs_discovered": 2, 00:09:06.924 "num_base_bdevs_operational": 3, 00:09:06.924 "base_bdevs_list": [ 00:09:06.924 { 00:09:06.924 "name": "BaseBdev1", 00:09:06.924 "uuid": "7b2b18d9-c76c-4338-858d-a84faeafef26", 00:09:06.924 "is_configured": true, 00:09:06.924 "data_offset": 2048, 00:09:06.924 "data_size": 63488 00:09:06.924 }, 00:09:06.924 { 00:09:06.924 "name": null, 00:09:06.924 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:06.924 "is_configured": 
false, 00:09:06.924 "data_offset": 0, 00:09:06.924 "data_size": 63488 00:09:06.924 }, 00:09:06.924 { 00:09:06.924 "name": "BaseBdev3", 00:09:06.924 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:06.924 "is_configured": true, 00:09:06.924 "data_offset": 2048, 00:09:06.924 "data_size": 63488 00:09:06.924 } 00:09:06.924 ] 00:09:06.924 }' 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.924 12:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.184 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.184 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.184 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.184 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:07.184 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.444 [2024-11-19 12:29:12.451223] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:07.444 12:29:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.444 "name": "Existed_Raid", 00:09:07.444 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:07.444 "strip_size_kb": 64, 00:09:07.444 "state": "configuring", 00:09:07.444 "raid_level": "concat", 00:09:07.444 "superblock": true, 00:09:07.444 "num_base_bdevs": 3, 00:09:07.444 
"num_base_bdevs_discovered": 1, 00:09:07.444 "num_base_bdevs_operational": 3, 00:09:07.444 "base_bdevs_list": [ 00:09:07.444 { 00:09:07.444 "name": null, 00:09:07.444 "uuid": "7b2b18d9-c76c-4338-858d-a84faeafef26", 00:09:07.444 "is_configured": false, 00:09:07.444 "data_offset": 0, 00:09:07.444 "data_size": 63488 00:09:07.444 }, 00:09:07.444 { 00:09:07.444 "name": null, 00:09:07.444 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:07.444 "is_configured": false, 00:09:07.444 "data_offset": 0, 00:09:07.444 "data_size": 63488 00:09:07.444 }, 00:09:07.444 { 00:09:07.444 "name": "BaseBdev3", 00:09:07.444 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:07.444 "is_configured": true, 00:09:07.444 "data_offset": 2048, 00:09:07.444 "data_size": 63488 00:09:07.444 } 00:09:07.444 ] 00:09:07.444 }' 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.444 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.704 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:07.704 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.704 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.704 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.963 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.963 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:07.963 12:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:07.963 12:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.963 12:29:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.963 [2024-11-19 12:29:12.996596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.963 
12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.963 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.963 "name": "Existed_Raid", 00:09:07.963 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:07.963 "strip_size_kb": 64, 00:09:07.963 "state": "configuring", 00:09:07.963 "raid_level": "concat", 00:09:07.963 "superblock": true, 00:09:07.963 "num_base_bdevs": 3, 00:09:07.963 "num_base_bdevs_discovered": 2, 00:09:07.963 "num_base_bdevs_operational": 3, 00:09:07.963 "base_bdevs_list": [ 00:09:07.963 { 00:09:07.963 "name": null, 00:09:07.963 "uuid": "7b2b18d9-c76c-4338-858d-a84faeafef26", 00:09:07.963 "is_configured": false, 00:09:07.963 "data_offset": 0, 00:09:07.964 "data_size": 63488 00:09:07.964 }, 00:09:07.964 { 00:09:07.964 "name": "BaseBdev2", 00:09:07.964 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:07.964 "is_configured": true, 00:09:07.964 "data_offset": 2048, 00:09:07.964 "data_size": 63488 00:09:07.964 }, 00:09:07.964 { 00:09:07.964 "name": "BaseBdev3", 00:09:07.964 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:07.964 "is_configured": true, 00:09:07.964 "data_offset": 2048, 00:09:07.964 "data_size": 63488 00:09:07.964 } 00:09:07.964 ] 00:09:07.964 }' 00:09:07.964 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.964 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.223 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7b2b18d9-c76c-4338-858d-a84faeafef26 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.482 [2024-11-19 12:29:13.530782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:08.482 [2024-11-19 12:29:13.531064] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:08.482 [2024-11-19 12:29:13.531120] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.482 [2024-11-19 12:29:13.531407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:08.482 NewBaseBdev 00:09:08.482 [2024-11-19 12:29:13.531559] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:08.482 [2024-11-19 12:29:13.531610] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006d00 00:09:08.482 [2024-11-19 12:29:13.531764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.482 [ 00:09:08.482 { 00:09:08.482 "name": "NewBaseBdev", 00:09:08.482 "aliases": [ 00:09:08.482 "7b2b18d9-c76c-4338-858d-a84faeafef26" 00:09:08.482 ], 00:09:08.482 "product_name": "Malloc disk", 00:09:08.482 "block_size": 512, 
00:09:08.482 "num_blocks": 65536, 00:09:08.482 "uuid": "7b2b18d9-c76c-4338-858d-a84faeafef26", 00:09:08.482 "assigned_rate_limits": { 00:09:08.482 "rw_ios_per_sec": 0, 00:09:08.482 "rw_mbytes_per_sec": 0, 00:09:08.482 "r_mbytes_per_sec": 0, 00:09:08.482 "w_mbytes_per_sec": 0 00:09:08.482 }, 00:09:08.482 "claimed": true, 00:09:08.482 "claim_type": "exclusive_write", 00:09:08.482 "zoned": false, 00:09:08.482 "supported_io_types": { 00:09:08.482 "read": true, 00:09:08.482 "write": true, 00:09:08.482 "unmap": true, 00:09:08.482 "flush": true, 00:09:08.482 "reset": true, 00:09:08.482 "nvme_admin": false, 00:09:08.482 "nvme_io": false, 00:09:08.482 "nvme_io_md": false, 00:09:08.482 "write_zeroes": true, 00:09:08.482 "zcopy": true, 00:09:08.482 "get_zone_info": false, 00:09:08.482 "zone_management": false, 00:09:08.482 "zone_append": false, 00:09:08.482 "compare": false, 00:09:08.482 "compare_and_write": false, 00:09:08.482 "abort": true, 00:09:08.482 "seek_hole": false, 00:09:08.482 "seek_data": false, 00:09:08.482 "copy": true, 00:09:08.482 "nvme_iov_md": false 00:09:08.482 }, 00:09:08.482 "memory_domains": [ 00:09:08.482 { 00:09:08.482 "dma_device_id": "system", 00:09:08.482 "dma_device_type": 1 00:09:08.482 }, 00:09:08.482 { 00:09:08.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.482 "dma_device_type": 2 00:09:08.482 } 00:09:08.482 ], 00:09:08.482 "driver_specific": {} 00:09:08.482 } 00:09:08.482 ] 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.482 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.483 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.483 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.483 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.483 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.483 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.483 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.483 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.483 "name": "Existed_Raid", 00:09:08.483 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:08.483 "strip_size_kb": 64, 00:09:08.483 "state": "online", 00:09:08.483 "raid_level": "concat", 00:09:08.483 "superblock": true, 00:09:08.483 "num_base_bdevs": 3, 00:09:08.483 "num_base_bdevs_discovered": 3, 00:09:08.483 "num_base_bdevs_operational": 3, 00:09:08.483 "base_bdevs_list": [ 00:09:08.483 { 00:09:08.483 "name": "NewBaseBdev", 00:09:08.483 "uuid": 
"7b2b18d9-c76c-4338-858d-a84faeafef26", 00:09:08.483 "is_configured": true, 00:09:08.483 "data_offset": 2048, 00:09:08.483 "data_size": 63488 00:09:08.483 }, 00:09:08.483 { 00:09:08.483 "name": "BaseBdev2", 00:09:08.483 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:08.483 "is_configured": true, 00:09:08.483 "data_offset": 2048, 00:09:08.483 "data_size": 63488 00:09:08.483 }, 00:09:08.483 { 00:09:08.483 "name": "BaseBdev3", 00:09:08.483 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:08.483 "is_configured": true, 00:09:08.483 "data_offset": 2048, 00:09:08.483 "data_size": 63488 00:09:08.483 } 00:09:08.483 ] 00:09:08.483 }' 00:09:08.483 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.483 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:08.742 [2024-11-19 12:29:13.922409] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.742 "name": "Existed_Raid", 00:09:08.742 "aliases": [ 00:09:08.742 "a6c3510d-54a6-4080-84e6-968dcb9917e9" 00:09:08.742 ], 00:09:08.742 "product_name": "Raid Volume", 00:09:08.742 "block_size": 512, 00:09:08.742 "num_blocks": 190464, 00:09:08.742 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:08.742 "assigned_rate_limits": { 00:09:08.742 "rw_ios_per_sec": 0, 00:09:08.742 "rw_mbytes_per_sec": 0, 00:09:08.742 "r_mbytes_per_sec": 0, 00:09:08.742 "w_mbytes_per_sec": 0 00:09:08.742 }, 00:09:08.742 "claimed": false, 00:09:08.742 "zoned": false, 00:09:08.742 "supported_io_types": { 00:09:08.742 "read": true, 00:09:08.742 "write": true, 00:09:08.742 "unmap": true, 00:09:08.742 "flush": true, 00:09:08.742 "reset": true, 00:09:08.742 "nvme_admin": false, 00:09:08.742 "nvme_io": false, 00:09:08.742 "nvme_io_md": false, 00:09:08.742 "write_zeroes": true, 00:09:08.742 "zcopy": false, 00:09:08.742 "get_zone_info": false, 00:09:08.742 "zone_management": false, 00:09:08.742 "zone_append": false, 00:09:08.742 "compare": false, 00:09:08.742 "compare_and_write": false, 00:09:08.742 "abort": false, 00:09:08.742 "seek_hole": false, 00:09:08.742 "seek_data": false, 00:09:08.742 "copy": false, 00:09:08.742 "nvme_iov_md": false 00:09:08.742 }, 00:09:08.742 "memory_domains": [ 00:09:08.742 { 00:09:08.742 "dma_device_id": "system", 00:09:08.742 "dma_device_type": 1 00:09:08.742 }, 00:09:08.742 { 00:09:08.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.742 "dma_device_type": 2 00:09:08.742 }, 00:09:08.742 { 00:09:08.742 "dma_device_id": "system", 00:09:08.742 "dma_device_type": 1 00:09:08.742 }, 00:09:08.742 { 00:09:08.742 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.742 "dma_device_type": 2 00:09:08.742 }, 00:09:08.742 { 00:09:08.742 "dma_device_id": "system", 00:09:08.742 "dma_device_type": 1 00:09:08.742 }, 00:09:08.742 { 00:09:08.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.742 "dma_device_type": 2 00:09:08.742 } 00:09:08.742 ], 00:09:08.742 "driver_specific": { 00:09:08.742 "raid": { 00:09:08.742 "uuid": "a6c3510d-54a6-4080-84e6-968dcb9917e9", 00:09:08.742 "strip_size_kb": 64, 00:09:08.742 "state": "online", 00:09:08.742 "raid_level": "concat", 00:09:08.742 "superblock": true, 00:09:08.742 "num_base_bdevs": 3, 00:09:08.742 "num_base_bdevs_discovered": 3, 00:09:08.742 "num_base_bdevs_operational": 3, 00:09:08.742 "base_bdevs_list": [ 00:09:08.742 { 00:09:08.742 "name": "NewBaseBdev", 00:09:08.742 "uuid": "7b2b18d9-c76c-4338-858d-a84faeafef26", 00:09:08.742 "is_configured": true, 00:09:08.742 "data_offset": 2048, 00:09:08.742 "data_size": 63488 00:09:08.742 }, 00:09:08.742 { 00:09:08.742 "name": "BaseBdev2", 00:09:08.742 "uuid": "6fa82432-8054-49e0-90e7-f453c5161c96", 00:09:08.742 "is_configured": true, 00:09:08.742 "data_offset": 2048, 00:09:08.742 "data_size": 63488 00:09:08.742 }, 00:09:08.742 { 00:09:08.742 "name": "BaseBdev3", 00:09:08.742 "uuid": "d147f734-ff83-42d5-8d4b-f6050ac1584c", 00:09:08.742 "is_configured": true, 00:09:08.742 "data_offset": 2048, 00:09:08.742 "data_size": 63488 00:09:08.742 } 00:09:08.742 ] 00:09:08.742 } 00:09:08.742 } 00:09:08.742 }' 00:09:08.742 12:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.002 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:09.002 BaseBdev2 00:09:09.002 BaseBdev3' 00:09:09.002 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:09.002 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.003 [2024-11-19 12:29:14.201639] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.003 [2024-11-19 12:29:14.201720] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.003 [2024-11-19 12:29:14.201831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.003 [2024-11-19 12:29:14.201889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.003 [2024-11-19 12:29:14.201910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77521 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77521 ']' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77521 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77521 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77521' 00:09:09.003 killing process with pid 77521 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77521 00:09:09.003 [2024-11-19 12:29:14.253695] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.003 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77521 00:09:09.262 [2024-11-19 12:29:14.284541] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.520 12:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:09.520 00:09:09.520 real 0m9.051s 00:09:09.520 user 0m15.368s 00:09:09.520 sys 0m1.900s 00:09:09.520 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:09.520 12:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.520 ************************************ 00:09:09.520 END TEST raid_state_function_test_sb 00:09:09.520 ************************************ 00:09:09.520 12:29:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:09.520 12:29:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:09.520 12:29:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.520 12:29:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.520 ************************************ 00:09:09.520 START TEST raid_superblock_test 00:09:09.520 ************************************ 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:09.521 12:29:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78130 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78130 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78130 ']' 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.521 12:29:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.521 [2024-11-19 12:29:14.699977] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
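The `waitforlisten 78130` step above blocks until the freshly spawned `bdev_svc` app accepts RPCs ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A rough stand-alone sketch of that wait loop (the helper name, timeout, and polling interval are assumptions for illustration, not SPDK's actual implementation):

```python
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 10.0, interval: float = 0.1) -> bool:
    """Poll until a UNIX-domain socket at `path` accepts a connection --
    roughly what waitforlisten does for /var/tmp/spdk.sock before the
    test starts issuing rpc_cmd calls against the app."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return True  # server is up and listening
        except OSError:
            time.sleep(interval)  # socket absent or not listening yet; retry
    return False
```

Only once this wait succeeds do the subsequent `rpc_cmd bdev_malloc_create` / `bdev_passthru_create` calls in the log make sense: they are all JSON-RPC requests sent over that socket.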
00:09:09.521 [2024-11-19 12:29:14.700157] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78130 ] 00:09:09.780 [2024-11-19 12:29:14.867073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.780 [2024-11-19 12:29:14.916902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.780 [2024-11-19 12:29:14.959268] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.780 [2024-11-19 12:29:14.959318] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:10.350 
12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.350 malloc1 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.350 [2024-11-19 12:29:15.553482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:10.350 [2024-11-19 12:29:15.553579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.350 [2024-11-19 12:29:15.553600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:10.350 [2024-11-19 12:29:15.553621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.350 [2024-11-19 12:29:15.555850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.350 [2024-11-19 12:29:15.555894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:10.350 pt1 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.350 malloc2 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.350 [2024-11-19 12:29:15.600200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.350 [2024-11-19 12:29:15.600376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.350 [2024-11-19 12:29:15.600433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:10.350 [2024-11-19 12:29:15.600490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.350 [2024-11-19 12:29:15.602846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.350 [2024-11-19 12:29:15.602924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.350 
pt2 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.350 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.611 malloc3 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.611 [2024-11-19 12:29:15.632993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:10.611 [2024-11-19 12:29:15.633109] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.611 [2024-11-19 12:29:15.633145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:10.611 [2024-11-19 12:29:15.633174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.611 [2024-11-19 12:29:15.635222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.611 [2024-11-19 12:29:15.635297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:10.611 pt3 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.611 [2024-11-19 12:29:15.645021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:10.611 [2024-11-19 12:29:15.646850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.611 [2024-11-19 12:29:15.646916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:10.611 [2024-11-19 12:29:15.647066] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:10.611 [2024-11-19 12:29:15.647077] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.611 [2024-11-19 12:29:15.647339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
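The DEBUG line above reports the assembled concat volume as `blockcnt 190464, blocklen 512`. Those numbers follow directly from the setup earlier in the log: each base bdev is created with `bdev_malloc_create 32 512` (32 MiB at 512-byte blocks), the `-s` superblock flag puts data at `data_offset 2048`, and concat exposes the sum of the three data regions. A minimal sketch of the arithmetic, using only values taken from this log:

```python
# Capacity arithmetic for the 3-disk concat volume in this log.
# Each base bdev: "bdev_malloc_create 32 512" -> 32 MiB at 512 B blocks.
malloc_mib = 32
block_size = 512
total_blocks = malloc_mib * 1024 * 1024 // block_size  # 65536 blocks per base bdev

# With a superblock (-s), data starts at data_offset 2048,
# leaving data_size usable blocks ("data_size": 63488 in the JSON above).
data_offset = 2048
data_size = total_blocks - data_offset

# concat concatenates the base bdevs' data regions end to end.
num_base_bdevs = 3
num_blocks = num_base_bdevs * data_size

print(total_blocks, data_size, num_blocks)  # 65536 63488 190464
```

This matches `"num_blocks": 190464` in the `bdev_get_bdevs` dump that the test verifies a few steps later.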
00:09:10.611 [2024-11-19 12:29:15.647482] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:10.611 [2024-11-19 12:29:15.647497] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:10.611 [2024-11-19 12:29:15.647619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.611 12:29:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.611 "name": "raid_bdev1", 00:09:10.611 "uuid": "c37820af-27ae-4eba-a6e9-c6485adcdd75", 00:09:10.611 "strip_size_kb": 64, 00:09:10.611 "state": "online", 00:09:10.611 "raid_level": "concat", 00:09:10.611 "superblock": true, 00:09:10.611 "num_base_bdevs": 3, 00:09:10.611 "num_base_bdevs_discovered": 3, 00:09:10.611 "num_base_bdevs_operational": 3, 00:09:10.611 "base_bdevs_list": [ 00:09:10.611 { 00:09:10.611 "name": "pt1", 00:09:10.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.611 "is_configured": true, 00:09:10.611 "data_offset": 2048, 00:09:10.611 "data_size": 63488 00:09:10.611 }, 00:09:10.611 { 00:09:10.611 "name": "pt2", 00:09:10.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.611 "is_configured": true, 00:09:10.611 "data_offset": 2048, 00:09:10.611 "data_size": 63488 00:09:10.611 }, 00:09:10.611 { 00:09:10.611 "name": "pt3", 00:09:10.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.611 "is_configured": true, 00:09:10.611 "data_offset": 2048, 00:09:10.611 "data_size": 63488 00:09:10.611 } 00:09:10.611 ] 00:09:10.611 }' 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.611 12:29:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.871 [2024-11-19 12:29:16.088576] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.871 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.871 "name": "raid_bdev1", 00:09:10.871 "aliases": [ 00:09:10.871 "c37820af-27ae-4eba-a6e9-c6485adcdd75" 00:09:10.871 ], 00:09:10.871 "product_name": "Raid Volume", 00:09:10.871 "block_size": 512, 00:09:10.871 "num_blocks": 190464, 00:09:10.871 "uuid": "c37820af-27ae-4eba-a6e9-c6485adcdd75", 00:09:10.871 "assigned_rate_limits": { 00:09:10.871 "rw_ios_per_sec": 0, 00:09:10.871 "rw_mbytes_per_sec": 0, 00:09:10.871 "r_mbytes_per_sec": 0, 00:09:10.871 "w_mbytes_per_sec": 0 00:09:10.871 }, 00:09:10.871 "claimed": false, 00:09:10.871 "zoned": false, 00:09:10.871 "supported_io_types": { 00:09:10.871 "read": true, 00:09:10.871 "write": true, 00:09:10.871 "unmap": true, 00:09:10.871 "flush": true, 00:09:10.871 "reset": true, 00:09:10.871 "nvme_admin": false, 00:09:10.871 "nvme_io": false, 00:09:10.871 "nvme_io_md": false, 00:09:10.871 "write_zeroes": true, 00:09:10.871 "zcopy": false, 00:09:10.871 "get_zone_info": false, 00:09:10.871 "zone_management": false, 00:09:10.871 "zone_append": false, 00:09:10.871 "compare": 
false, 00:09:10.871 "compare_and_write": false, 00:09:10.871 "abort": false, 00:09:10.871 "seek_hole": false, 00:09:10.871 "seek_data": false, 00:09:10.871 "copy": false, 00:09:10.871 "nvme_iov_md": false 00:09:10.871 }, 00:09:10.871 "memory_domains": [ 00:09:10.871 { 00:09:10.871 "dma_device_id": "system", 00:09:10.871 "dma_device_type": 1 00:09:10.871 }, 00:09:10.871 { 00:09:10.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.871 "dma_device_type": 2 00:09:10.871 }, 00:09:10.871 { 00:09:10.871 "dma_device_id": "system", 00:09:10.871 "dma_device_type": 1 00:09:10.871 }, 00:09:10.871 { 00:09:10.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.871 "dma_device_type": 2 00:09:10.871 }, 00:09:10.871 { 00:09:10.871 "dma_device_id": "system", 00:09:10.871 "dma_device_type": 1 00:09:10.871 }, 00:09:10.871 { 00:09:10.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.871 "dma_device_type": 2 00:09:10.871 } 00:09:10.871 ], 00:09:10.871 "driver_specific": { 00:09:10.871 "raid": { 00:09:10.871 "uuid": "c37820af-27ae-4eba-a6e9-c6485adcdd75", 00:09:10.871 "strip_size_kb": 64, 00:09:10.871 "state": "online", 00:09:10.871 "raid_level": "concat", 00:09:10.871 "superblock": true, 00:09:10.871 "num_base_bdevs": 3, 00:09:10.871 "num_base_bdevs_discovered": 3, 00:09:10.871 "num_base_bdevs_operational": 3, 00:09:10.871 "base_bdevs_list": [ 00:09:10.871 { 00:09:10.871 "name": "pt1", 00:09:10.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.871 "is_configured": true, 00:09:10.871 "data_offset": 2048, 00:09:10.871 "data_size": 63488 00:09:10.871 }, 00:09:10.871 { 00:09:10.871 "name": "pt2", 00:09:10.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.872 "is_configured": true, 00:09:10.872 "data_offset": 2048, 00:09:10.872 "data_size": 63488 00:09:10.872 }, 00:09:10.872 { 00:09:10.872 "name": "pt3", 00:09:10.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.872 "is_configured": true, 00:09:10.872 "data_offset": 2048, 00:09:10.872 
"data_size": 63488 00:09:10.872 } 00:09:10.872 ] 00:09:10.872 } 00:09:10.872 } 00:09:10.872 }' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:11.131 pt2 00:09:11.131 pt3' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.131 12:29:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.131 [2024-11-19 12:29:16.364131] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.131 12:29:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c37820af-27ae-4eba-a6e9-c6485adcdd75 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c37820af-27ae-4eba-a6e9-c6485adcdd75 ']' 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.131 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.411 [2024-11-19 12:29:16.395818] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.411 [2024-11-19 12:29:16.395904] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.411 [2024-11-19 12:29:16.396003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.411 [2024-11-19 12:29:16.396081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.411 [2024-11-19 12:29:16.396136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.411 12:29:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.411 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.411 [2024-11-19 12:29:16.551551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:11.411 [2024-11-19 12:29:16.553429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:09:11.411 [2024-11-19 12:29:16.553516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:11.411 [2024-11-19 12:29:16.553584] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:11.411 [2024-11-19 12:29:16.553659] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:11.411 [2024-11-19 12:29:16.553725] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:11.411 [2024-11-19 12:29:16.553810] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.411 [2024-11-19 12:29:16.553846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:11.411 request: 00:09:11.412 { 00:09:11.412 "name": "raid_bdev1", 00:09:11.412 "raid_level": "concat", 00:09:11.412 "base_bdevs": [ 00:09:11.412 "malloc1", 00:09:11.412 "malloc2", 00:09:11.412 "malloc3" 00:09:11.412 ], 00:09:11.412 "strip_size_kb": 64, 00:09:11.412 "superblock": false, 00:09:11.412 "method": "bdev_raid_create", 00:09:11.412 "req_id": 1 00:09:11.412 } 00:09:11.412 Got JSON-RPC error response 00:09:11.412 response: 00:09:11.412 { 00:09:11.412 "code": -17, 00:09:11.412 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:11.412 } 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.412 [2024-11-19 12:29:16.619468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:11.412 [2024-11-19 12:29:16.619547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.412 [2024-11-19 12:29:16.619569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:11.412 [2024-11-19 12:29:16.619581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.412 [2024-11-19 12:29:16.621902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.412 [2024-11-19 12:29:16.621941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:11.412 [2024-11-19 12:29:16.622031] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:11.412 [2024-11-19 12:29:16.622088] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:11.412 pt1 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.412 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.719 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.719 "name": "raid_bdev1", 
00:09:11.719 "uuid": "c37820af-27ae-4eba-a6e9-c6485adcdd75", 00:09:11.719 "strip_size_kb": 64, 00:09:11.719 "state": "configuring", 00:09:11.719 "raid_level": "concat", 00:09:11.719 "superblock": true, 00:09:11.719 "num_base_bdevs": 3, 00:09:11.719 "num_base_bdevs_discovered": 1, 00:09:11.719 "num_base_bdevs_operational": 3, 00:09:11.719 "base_bdevs_list": [ 00:09:11.719 { 00:09:11.719 "name": "pt1", 00:09:11.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.719 "is_configured": true, 00:09:11.719 "data_offset": 2048, 00:09:11.719 "data_size": 63488 00:09:11.719 }, 00:09:11.719 { 00:09:11.719 "name": null, 00:09:11.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.719 "is_configured": false, 00:09:11.719 "data_offset": 2048, 00:09:11.719 "data_size": 63488 00:09:11.719 }, 00:09:11.719 { 00:09:11.719 "name": null, 00:09:11.719 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.719 "is_configured": false, 00:09:11.719 "data_offset": 2048, 00:09:11.719 "data_size": 63488 00:09:11.719 } 00:09:11.719 ] 00:09:11.719 }' 00:09:11.719 12:29:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.719 12:29:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.977 [2024-11-19 12:29:17.034834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:11.977 [2024-11-19 12:29:17.034929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.977 [2024-11-19 12:29:17.034954] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:11.977 [2024-11-19 12:29:17.034969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.977 [2024-11-19 12:29:17.035472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.977 [2024-11-19 12:29:17.035495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:11.977 [2024-11-19 12:29:17.035592] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:11.977 [2024-11-19 12:29:17.035620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:11.977 pt2 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.977 [2024-11-19 12:29:17.046858] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.977 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.977 "name": "raid_bdev1", 00:09:11.977 "uuid": "c37820af-27ae-4eba-a6e9-c6485adcdd75", 00:09:11.977 "strip_size_kb": 64, 00:09:11.977 "state": "configuring", 00:09:11.977 "raid_level": "concat", 00:09:11.977 "superblock": true, 00:09:11.977 "num_base_bdevs": 3, 00:09:11.977 "num_base_bdevs_discovered": 1, 00:09:11.977 "num_base_bdevs_operational": 3, 00:09:11.977 "base_bdevs_list": [ 00:09:11.977 { 00:09:11.977 "name": "pt1", 00:09:11.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.978 "is_configured": true, 00:09:11.978 "data_offset": 2048, 00:09:11.978 "data_size": 63488 00:09:11.978 }, 00:09:11.978 { 00:09:11.978 "name": null, 00:09:11.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.978 "is_configured": false, 00:09:11.978 "data_offset": 0, 00:09:11.978 "data_size": 63488 00:09:11.978 }, 00:09:11.978 { 00:09:11.978 "name": null, 00:09:11.978 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.978 "is_configured": false, 00:09:11.978 "data_offset": 2048, 00:09:11.978 "data_size": 63488 00:09:11.978 } 00:09:11.978 ] 00:09:11.978 }' 00:09:11.978 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.978 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.237 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:12.237 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.237 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.237 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.237 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.237 [2024-11-19 12:29:17.490021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.237 [2024-11-19 12:29:17.490171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.237 [2024-11-19 12:29:17.490212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:12.237 [2024-11-19 12:29:17.490241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.237 [2024-11-19 12:29:17.490787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.237 [2024-11-19 12:29:17.490845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.237 [2024-11-19 12:29:17.490970] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:12.237 [2024-11-19 12:29:17.491024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.237 pt2 00:09:12.237 12:29:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.237 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.237 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.497 [2024-11-19 12:29:17.501910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:12.497 [2024-11-19 12:29:17.501985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.497 [2024-11-19 12:29:17.502018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:12.497 [2024-11-19 12:29:17.502044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.497 [2024-11-19 12:29:17.502433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.497 [2024-11-19 12:29:17.502484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:12.497 [2024-11-19 12:29:17.502568] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:12.497 [2024-11-19 12:29:17.502613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:12.497 [2024-11-19 12:29:17.502740] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:12.497 [2024-11-19 12:29:17.502789] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.497 [2024-11-19 12:29:17.503052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:09:12.497 [2024-11-19 12:29:17.503194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:12.497 [2024-11-19 12:29:17.503235] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:12.497 [2024-11-19 12:29:17.503375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.497 pt3 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.497 12:29:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.497 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.497 "name": "raid_bdev1", 00:09:12.497 "uuid": "c37820af-27ae-4eba-a6e9-c6485adcdd75", 00:09:12.497 "strip_size_kb": 64, 00:09:12.497 "state": "online", 00:09:12.497 "raid_level": "concat", 00:09:12.497 "superblock": true, 00:09:12.497 "num_base_bdevs": 3, 00:09:12.497 "num_base_bdevs_discovered": 3, 00:09:12.497 "num_base_bdevs_operational": 3, 00:09:12.497 "base_bdevs_list": [ 00:09:12.497 { 00:09:12.497 "name": "pt1", 00:09:12.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.498 "is_configured": true, 00:09:12.498 "data_offset": 2048, 00:09:12.498 "data_size": 63488 00:09:12.498 }, 00:09:12.498 { 00:09:12.498 "name": "pt2", 00:09:12.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.498 "is_configured": true, 00:09:12.498 "data_offset": 2048, 00:09:12.498 "data_size": 63488 00:09:12.498 }, 00:09:12.498 { 00:09:12.498 "name": "pt3", 00:09:12.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.498 "is_configured": true, 00:09:12.498 "data_offset": 2048, 00:09:12.498 "data_size": 63488 00:09:12.498 } 00:09:12.498 ] 00:09:12.498 }' 00:09:12.498 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.498 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.758 [2024-11-19 12:29:17.973531] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.758 12:29:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.758 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.758 "name": "raid_bdev1", 00:09:12.758 "aliases": [ 00:09:12.758 "c37820af-27ae-4eba-a6e9-c6485adcdd75" 00:09:12.758 ], 00:09:12.758 "product_name": "Raid Volume", 00:09:12.758 "block_size": 512, 00:09:12.758 "num_blocks": 190464, 00:09:12.758 "uuid": "c37820af-27ae-4eba-a6e9-c6485adcdd75", 00:09:12.758 "assigned_rate_limits": { 00:09:12.758 "rw_ios_per_sec": 0, 00:09:12.758 "rw_mbytes_per_sec": 0, 00:09:12.758 "r_mbytes_per_sec": 0, 00:09:12.758 "w_mbytes_per_sec": 0 00:09:12.758 }, 00:09:12.758 "claimed": false, 00:09:12.758 "zoned": false, 00:09:12.758 "supported_io_types": { 00:09:12.758 "read": true, 00:09:12.758 "write": true, 00:09:12.758 "unmap": true, 00:09:12.758 "flush": true, 00:09:12.758 "reset": true, 00:09:12.758 "nvme_admin": false, 00:09:12.758 "nvme_io": false, 
00:09:12.758 "nvme_io_md": false, 00:09:12.758 "write_zeroes": true, 00:09:12.758 "zcopy": false, 00:09:12.758 "get_zone_info": false, 00:09:12.758 "zone_management": false, 00:09:12.758 "zone_append": false, 00:09:12.758 "compare": false, 00:09:12.758 "compare_and_write": false, 00:09:12.758 "abort": false, 00:09:12.758 "seek_hole": false, 00:09:12.758 "seek_data": false, 00:09:12.758 "copy": false, 00:09:12.758 "nvme_iov_md": false 00:09:12.758 }, 00:09:12.758 "memory_domains": [ 00:09:12.758 { 00:09:12.758 "dma_device_id": "system", 00:09:12.758 "dma_device_type": 1 00:09:12.758 }, 00:09:12.758 { 00:09:12.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.758 "dma_device_type": 2 00:09:12.758 }, 00:09:12.758 { 00:09:12.758 "dma_device_id": "system", 00:09:12.758 "dma_device_type": 1 00:09:12.758 }, 00:09:12.758 { 00:09:12.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.758 "dma_device_type": 2 00:09:12.758 }, 00:09:12.758 { 00:09:12.758 "dma_device_id": "system", 00:09:12.758 "dma_device_type": 1 00:09:12.758 }, 00:09:12.758 { 00:09:12.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.758 "dma_device_type": 2 00:09:12.758 } 00:09:12.758 ], 00:09:12.758 "driver_specific": { 00:09:12.758 "raid": { 00:09:12.758 "uuid": "c37820af-27ae-4eba-a6e9-c6485adcdd75", 00:09:12.758 "strip_size_kb": 64, 00:09:12.758 "state": "online", 00:09:12.758 "raid_level": "concat", 00:09:12.758 "superblock": true, 00:09:12.758 "num_base_bdevs": 3, 00:09:12.758 "num_base_bdevs_discovered": 3, 00:09:12.758 "num_base_bdevs_operational": 3, 00:09:12.758 "base_bdevs_list": [ 00:09:12.758 { 00:09:12.758 "name": "pt1", 00:09:12.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.758 "is_configured": true, 00:09:12.758 "data_offset": 2048, 00:09:12.758 "data_size": 63488 00:09:12.758 }, 00:09:12.758 { 00:09:12.758 "name": "pt2", 00:09:12.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.758 "is_configured": true, 00:09:12.758 "data_offset": 2048, 00:09:12.758 
"data_size": 63488 00:09:12.758 }, 00:09:12.758 { 00:09:12.758 "name": "pt3", 00:09:12.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.758 "is_configured": true, 00:09:12.758 "data_offset": 2048, 00:09:12.758 "data_size": 63488 00:09:12.758 } 00:09:12.758 ] 00:09:12.758 } 00:09:12.758 } 00:09:12.758 }' 00:09:12.758 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.018 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:13.018 pt2 00:09:13.018 pt3' 00:09:13.018 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.018 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.018 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.018 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:13.018 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.018 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:13.019 [2024-11-19 12:29:18.253080] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.019 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c37820af-27ae-4eba-a6e9-c6485adcdd75 '!=' c37820af-27ae-4eba-a6e9-c6485adcdd75 ']' 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78130 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78130 ']' 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78130 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78130 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78130' 00:09:13.279 killing process with pid 78130 00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78130 00:09:13.279 [2024-11-19 12:29:18.339021] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:13.279 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 78130 00:09:13.279 [2024-11-19 12:29:18.339199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.279 [2024-11-19 12:29:18.339285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.279 [2024-11-19 12:29:18.339349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:13.279 [2024-11-19 12:29:18.400183] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.538 12:29:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:13.538 00:09:13.538 real 0m4.180s 00:09:13.538 user 0m6.425s 00:09:13.538 sys 0m0.940s 00:09:13.538 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.538 12:29:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.538 ************************************ 00:09:13.538 END TEST raid_superblock_test 00:09:13.538 ************************************ 00:09:13.798 12:29:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:13.798 12:29:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:13.798 12:29:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.798 12:29:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.798 ************************************ 00:09:13.798 START TEST raid_read_error_test 00:09:13.798 ************************************ 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:13.798 12:29:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.26FdHqi607 00:09:13.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78372 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78372 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78372 ']' 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.798 12:29:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.798 [2024-11-19 12:29:18.963793] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:13.799 [2024-11-19 12:29:18.964018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78372 ] 00:09:14.058 [2024-11-19 12:29:19.126499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.058 [2024-11-19 12:29:19.174079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.058 [2024-11-19 12:29:19.216670] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.058 [2024-11-19 12:29:19.216713] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.627 BaseBdev1_malloc 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.627 true 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.627 [2024-11-19 12:29:19.847454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:14.627 [2024-11-19 12:29:19.847533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.627 [2024-11-19 12:29:19.847558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:14.627 [2024-11-19 12:29:19.847567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.627 [2024-11-19 12:29:19.849763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.627 [2024-11-19 12:29:19.849801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:14.627 BaseBdev1 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.627 BaseBdev2_malloc 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.627 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.888 true 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.888 [2024-11-19 12:29:19.895382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:14.888 [2024-11-19 12:29:19.895507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.888 [2024-11-19 12:29:19.895531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:14.888 [2024-11-19 12:29:19.895540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.888 [2024-11-19 12:29:19.897663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.888 [2024-11-19 12:29:19.897694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:14.888 BaseBdev2 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.888 BaseBdev3_malloc 00:09:14.888 12:29:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.888 true 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.888 [2024-11-19 12:29:19.935863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:14.888 [2024-11-19 12:29:19.935913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.888 [2024-11-19 12:29:19.935932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:14.888 [2024-11-19 12:29:19.935942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.888 [2024-11-19 12:29:19.937974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.888 [2024-11-19 12:29:19.938074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:14.888 BaseBdev3 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.888 [2024-11-19 12:29:19.947905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.888 [2024-11-19 12:29:19.949712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.888 [2024-11-19 12:29:19.949841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.888 [2024-11-19 12:29:19.950042] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:14.888 [2024-11-19 12:29:19.950098] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:14.888 [2024-11-19 12:29:19.950353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:14.888 [2024-11-19 12:29:19.950509] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:14.888 [2024-11-19 12:29:19.950550] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:14.888 [2024-11-19 12:29:19.950744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.888 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.889 12:29:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.889 "name": "raid_bdev1", 00:09:14.889 "uuid": "abbb6124-b687-4fe2-bb63-d024a1e04d70", 00:09:14.889 "strip_size_kb": 64, 00:09:14.889 "state": "online", 00:09:14.889 "raid_level": "concat", 00:09:14.889 "superblock": true, 00:09:14.889 "num_base_bdevs": 3, 00:09:14.889 "num_base_bdevs_discovered": 3, 00:09:14.889 "num_base_bdevs_operational": 3, 00:09:14.889 "base_bdevs_list": [ 00:09:14.889 { 00:09:14.889 "name": "BaseBdev1", 00:09:14.889 "uuid": "1e156da3-9096-57b3-a9a4-71fa188ffb09", 00:09:14.889 "is_configured": true, 00:09:14.889 "data_offset": 2048, 00:09:14.889 "data_size": 63488 00:09:14.889 }, 00:09:14.889 { 00:09:14.889 "name": "BaseBdev2", 00:09:14.889 "uuid": "91a0ec6c-374f-59ad-bbe0-c073943a143a", 00:09:14.889 "is_configured": true, 00:09:14.889 "data_offset": 2048, 00:09:14.889 "data_size": 63488 
00:09:14.889 }, 00:09:14.889 { 00:09:14.889 "name": "BaseBdev3", 00:09:14.889 "uuid": "b6f0a079-2da8-5fbf-ae9c-f926457aa989", 00:09:14.889 "is_configured": true, 00:09:14.889 "data_offset": 2048, 00:09:14.889 "data_size": 63488 00:09:14.889 } 00:09:14.889 ] 00:09:14.889 }' 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.889 12:29:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.149 12:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:15.149 12:29:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:15.149 [2024-11-19 12:29:20.387539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.087 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.346 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.346 "name": "raid_bdev1", 00:09:16.346 "uuid": "abbb6124-b687-4fe2-bb63-d024a1e04d70", 00:09:16.346 "strip_size_kb": 64, 00:09:16.346 "state": "online", 00:09:16.346 "raid_level": "concat", 00:09:16.346 "superblock": true, 00:09:16.346 "num_base_bdevs": 3, 00:09:16.346 "num_base_bdevs_discovered": 3, 00:09:16.346 "num_base_bdevs_operational": 3, 00:09:16.346 "base_bdevs_list": [ 00:09:16.346 { 00:09:16.346 "name": "BaseBdev1", 00:09:16.346 "uuid": "1e156da3-9096-57b3-a9a4-71fa188ffb09", 00:09:16.346 "is_configured": true, 00:09:16.346 "data_offset": 2048, 00:09:16.346 "data_size": 63488 
00:09:16.346 }, 00:09:16.346 { 00:09:16.346 "name": "BaseBdev2", 00:09:16.346 "uuid": "91a0ec6c-374f-59ad-bbe0-c073943a143a", 00:09:16.346 "is_configured": true, 00:09:16.346 "data_offset": 2048, 00:09:16.346 "data_size": 63488 00:09:16.346 }, 00:09:16.346 { 00:09:16.346 "name": "BaseBdev3", 00:09:16.346 "uuid": "b6f0a079-2da8-5fbf-ae9c-f926457aa989", 00:09:16.346 "is_configured": true, 00:09:16.346 "data_offset": 2048, 00:09:16.346 "data_size": 63488 00:09:16.346 } 00:09:16.346 ] 00:09:16.346 }' 00:09:16.346 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.346 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.605 [2024-11-19 12:29:21.779270] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.605 [2024-11-19 12:29:21.779409] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.605 [2024-11-19 12:29:21.781892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.605 [2024-11-19 12:29:21.781986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.605 [2024-11-19 12:29:21.782043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.605 [2024-11-19 12:29:21.782101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:16.605 { 00:09:16.605 "results": [ 00:09:16.605 { 00:09:16.605 "job": "raid_bdev1", 00:09:16.605 "core_mask": "0x1", 00:09:16.605 "workload": "randrw", 00:09:16.605 "percentage": 50, 
00:09:16.605 "status": "finished", 00:09:16.605 "queue_depth": 1, 00:09:16.605 "io_size": 131072, 00:09:16.605 "runtime": 1.392581, 00:09:16.605 "iops": 16888.066116082297, 00:09:16.605 "mibps": 2111.008264510287, 00:09:16.605 "io_failed": 1, 00:09:16.605 "io_timeout": 0, 00:09:16.605 "avg_latency_us": 82.16970814825734, 00:09:16.605 "min_latency_us": 24.370305676855896, 00:09:16.605 "max_latency_us": 1366.5257641921398 00:09:16.605 } 00:09:16.605 ], 00:09:16.605 "core_count": 1 00:09:16.605 } 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78372 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78372 ']' 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78372 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78372 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78372' 00:09:16.605 killing process with pid 78372 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78372 00:09:16.605 [2024-11-19 12:29:21.832692] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.605 12:29:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78372 00:09:16.605 [2024-11-19 
12:29:21.857350] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.863 12:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:16.863 12:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.26FdHqi607 00:09:16.863 12:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:16.863 12:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:16.864 12:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:16.864 12:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.864 12:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.864 ************************************ 00:09:16.864 END TEST raid_read_error_test 00:09:16.864 ************************************ 00:09:16.864 12:29:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:16.864 00:09:16.864 real 0m3.245s 00:09:16.864 user 0m4.042s 00:09:16.864 sys 0m0.546s 00:09:16.864 12:29:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.864 12:29:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.122 12:29:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:17.122 12:29:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:17.122 12:29:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.122 12:29:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.122 ************************************ 00:09:17.122 START TEST raid_write_error_test 00:09:17.122 ************************************ 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:17.122 12:29:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:17.122 12:29:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ufQOwqaIxd 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78501 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78501 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78501 ']' 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.122 12:29:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.122 [2024-11-19 12:29:22.273909] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:17.122 [2024-11-19 12:29:22.274117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78501 ] 00:09:17.380 [2024-11-19 12:29:22.435079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.380 [2024-11-19 12:29:22.482115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.380 [2024-11-19 12:29:22.524324] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.380 [2024-11-19 12:29:22.524369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.947 BaseBdev1_malloc 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.947 true 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.947 [2024-11-19 12:29:23.150937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:17.947 [2024-11-19 12:29:23.151077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.947 [2024-11-19 12:29:23.151108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:17.947 [2024-11-19 12:29:23.151117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.947 [2024-11-19 12:29:23.153342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.947 [2024-11-19 12:29:23.153378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.947 BaseBdev1 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.947 12:29:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.947 BaseBdev2_malloc 00:09:17.948 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.948 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.948 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.948 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.948 true 00:09:17.948 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.948 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.948 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.948 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.948 [2024-11-19 12:29:23.202916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.948 [2024-11-19 12:29:23.203074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.948 [2024-11-19 12:29:23.203103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:17.948 [2024-11-19 12:29:23.203113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.948 [2024-11-19 12:29:23.205261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.948 [2024-11-19 12:29:23.205341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:18.233 BaseBdev2 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.233 12:29:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 BaseBdev3_malloc 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 true 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 [2024-11-19 12:29:23.243778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:18.233 [2024-11-19 12:29:23.243932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.233 [2024-11-19 12:29:23.243959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:18.233 [2024-11-19 12:29:23.243969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.233 [2024-11-19 12:29:23.246057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.233 [2024-11-19 12:29:23.246092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:18.233 BaseBdev3 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 [2024-11-19 12:29:23.255811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.233 [2024-11-19 12:29:23.257621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.233 [2024-11-19 12:29:23.257704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.233 [2024-11-19 12:29:23.257892] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:18.233 [2024-11-19 12:29:23.257913] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.233 [2024-11-19 12:29:23.258195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:18.233 [2024-11-19 12:29:23.258345] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:18.233 [2024-11-19 12:29:23.258356] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:18.233 [2024-11-19 12:29:23.258491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.233 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.234 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.234 "name": "raid_bdev1", 00:09:18.234 "uuid": "549f464c-b10e-4452-ab65-08e113b1b947", 00:09:18.234 "strip_size_kb": 64, 00:09:18.234 "state": "online", 00:09:18.234 "raid_level": "concat", 00:09:18.234 "superblock": true, 00:09:18.234 "num_base_bdevs": 3, 00:09:18.234 "num_base_bdevs_discovered": 3, 00:09:18.234 "num_base_bdevs_operational": 3, 00:09:18.234 "base_bdevs_list": [ 00:09:18.234 { 00:09:18.234 
"name": "BaseBdev1", 00:09:18.234 "uuid": "fbe4404c-d9de-5eea-a664-668dbaefbc41", 00:09:18.234 "is_configured": true, 00:09:18.234 "data_offset": 2048, 00:09:18.234 "data_size": 63488 00:09:18.234 }, 00:09:18.234 { 00:09:18.234 "name": "BaseBdev2", 00:09:18.234 "uuid": "5c6bdd2f-372c-527b-ae4a-104cc56dc9c0", 00:09:18.234 "is_configured": true, 00:09:18.234 "data_offset": 2048, 00:09:18.234 "data_size": 63488 00:09:18.234 }, 00:09:18.234 { 00:09:18.234 "name": "BaseBdev3", 00:09:18.234 "uuid": "2ae40159-aacd-569e-85c2-f3a0ec062e72", 00:09:18.234 "is_configured": true, 00:09:18.234 "data_offset": 2048, 00:09:18.234 "data_size": 63488 00:09:18.234 } 00:09:18.234 ] 00:09:18.234 }' 00:09:18.234 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.234 12:29:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.492 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:18.492 12:29:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:18.492 [2024-11-19 12:29:23.731334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.429 12:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.688 12:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.688 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.688 "name": "raid_bdev1", 00:09:19.688 "uuid": "549f464c-b10e-4452-ab65-08e113b1b947", 00:09:19.688 "strip_size_kb": 64, 00:09:19.688 "state": "online", 
00:09:19.688 "raid_level": "concat", 00:09:19.688 "superblock": true, 00:09:19.688 "num_base_bdevs": 3, 00:09:19.688 "num_base_bdevs_discovered": 3, 00:09:19.688 "num_base_bdevs_operational": 3, 00:09:19.688 "base_bdevs_list": [ 00:09:19.688 { 00:09:19.688 "name": "BaseBdev1", 00:09:19.688 "uuid": "fbe4404c-d9de-5eea-a664-668dbaefbc41", 00:09:19.688 "is_configured": true, 00:09:19.688 "data_offset": 2048, 00:09:19.688 "data_size": 63488 00:09:19.688 }, 00:09:19.688 { 00:09:19.688 "name": "BaseBdev2", 00:09:19.688 "uuid": "5c6bdd2f-372c-527b-ae4a-104cc56dc9c0", 00:09:19.688 "is_configured": true, 00:09:19.688 "data_offset": 2048, 00:09:19.688 "data_size": 63488 00:09:19.688 }, 00:09:19.688 { 00:09:19.688 "name": "BaseBdev3", 00:09:19.688 "uuid": "2ae40159-aacd-569e-85c2-f3a0ec062e72", 00:09:19.688 "is_configured": true, 00:09:19.688 "data_offset": 2048, 00:09:19.688 "data_size": 63488 00:09:19.688 } 00:09:19.688 ] 00:09:19.688 }' 00:09:19.688 12:29:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.688 12:29:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.947 [2024-11-19 12:29:25.082672] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.947 [2024-11-19 12:29:25.082724] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.947 [2024-11-19 12:29:25.085304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.947 [2024-11-19 12:29:25.085463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.947 [2024-11-19 12:29:25.085506] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.947 [2024-11-19 12:29:25.085518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:19.947 { 00:09:19.947 "results": [ 00:09:19.947 { 00:09:19.947 "job": "raid_bdev1", 00:09:19.947 "core_mask": "0x1", 00:09:19.947 "workload": "randrw", 00:09:19.947 "percentage": 50, 00:09:19.947 "status": "finished", 00:09:19.947 "queue_depth": 1, 00:09:19.947 "io_size": 131072, 00:09:19.947 "runtime": 1.35209, 00:09:19.947 "iops": 16946.357121197554, 00:09:19.947 "mibps": 2118.2946401496943, 00:09:19.947 "io_failed": 1, 00:09:19.947 "io_timeout": 0, 00:09:19.947 "avg_latency_us": 81.82325940206269, 00:09:19.947 "min_latency_us": 25.041048034934498, 00:09:19.947 "max_latency_us": 1438.071615720524 00:09:19.947 } 00:09:19.947 ], 00:09:19.947 "core_count": 1 00:09:19.947 } 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78501 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78501 ']' 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78501 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78501 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 78501' 00:09:19.947 killing process with pid 78501 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78501 00:09:19.947 [2024-11-19 12:29:25.134373] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.947 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78501 00:09:19.947 [2024-11-19 12:29:25.159471] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ufQOwqaIxd 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.206 ************************************ 00:09:20.206 END TEST raid_write_error_test 00:09:20.206 ************************************ 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:20.206 00:09:20.206 real 0m3.234s 00:09:20.206 user 0m4.009s 00:09:20.206 sys 0m0.544s 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.206 12:29:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.206 12:29:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:20.206 12:29:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:20.206 12:29:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:20.206 12:29:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.206 12:29:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.465 ************************************ 00:09:20.465 START TEST raid_state_function_test 00:09:20.465 ************************************ 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:20.465 Process raid pid: 78628 00:09:20.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78628 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78628' 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78628 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78628 ']' 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.465 12:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:20.465 [2024-11-19 12:29:25.561787] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:20.465 [2024-11-19 12:29:25.561947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.724 [2024-11-19 12:29:25.724003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.724 [2024-11-19 12:29:25.775269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.724 [2024-11-19 12:29:25.817355] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.724 [2024-11-19 12:29:25.817397] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.293 [2024-11-19 12:29:26.414288] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.293 [2024-11-19 12:29:26.414440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.293 [2024-11-19 12:29:26.414481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.293 [2024-11-19 12:29:26.414492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.293 [2024-11-19 12:29:26.414498] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:09:21.293 [2024-11-19 12:29:26.414511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.293 12:29:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.293 "name": "Existed_Raid", 00:09:21.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.293 "strip_size_kb": 0, 00:09:21.293 "state": "configuring", 00:09:21.293 "raid_level": "raid1", 00:09:21.293 "superblock": false, 00:09:21.293 "num_base_bdevs": 3, 00:09:21.293 "num_base_bdevs_discovered": 0, 00:09:21.293 "num_base_bdevs_operational": 3, 00:09:21.293 "base_bdevs_list": [ 00:09:21.293 { 00:09:21.293 "name": "BaseBdev1", 00:09:21.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.293 "is_configured": false, 00:09:21.293 "data_offset": 0, 00:09:21.293 "data_size": 0 00:09:21.293 }, 00:09:21.293 { 00:09:21.293 "name": "BaseBdev2", 00:09:21.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.293 "is_configured": false, 00:09:21.293 "data_offset": 0, 00:09:21.293 "data_size": 0 00:09:21.293 }, 00:09:21.293 { 00:09:21.293 "name": "BaseBdev3", 00:09:21.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.293 "is_configured": false, 00:09:21.293 "data_offset": 0, 00:09:21.293 "data_size": 0 00:09:21.293 } 00:09:21.293 ] 00:09:21.293 }' 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.293 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.862 [2024-11-19 12:29:26.837506] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.862 [2024-11-19 12:29:26.837648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 
00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.862 [2024-11-19 12:29:26.849485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.862 [2024-11-19 12:29:26.849570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.862 [2024-11-19 12:29:26.849612] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.862 [2024-11-19 12:29:26.849635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.862 [2024-11-19 12:29:26.849653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.862 [2024-11-19 12:29:26.849673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.862 [2024-11-19 12:29:26.870129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.862 BaseBdev1 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.862 [ 00:09:21.862 { 00:09:21.862 "name": "BaseBdev1", 00:09:21.862 "aliases": [ 00:09:21.862 "34db4804-877a-4f61-8a50-9a30457acaaf" 00:09:21.862 ], 00:09:21.862 "product_name": "Malloc disk", 00:09:21.862 "block_size": 512, 00:09:21.862 "num_blocks": 65536, 00:09:21.862 "uuid": "34db4804-877a-4f61-8a50-9a30457acaaf", 00:09:21.862 "assigned_rate_limits": { 00:09:21.862 "rw_ios_per_sec": 0, 00:09:21.862 "rw_mbytes_per_sec": 0, 00:09:21.862 "r_mbytes_per_sec": 0, 00:09:21.862 "w_mbytes_per_sec": 0 00:09:21.862 }, 
00:09:21.862 "claimed": true, 00:09:21.862 "claim_type": "exclusive_write", 00:09:21.862 "zoned": false, 00:09:21.862 "supported_io_types": { 00:09:21.862 "read": true, 00:09:21.862 "write": true, 00:09:21.862 "unmap": true, 00:09:21.862 "flush": true, 00:09:21.862 "reset": true, 00:09:21.862 "nvme_admin": false, 00:09:21.862 "nvme_io": false, 00:09:21.862 "nvme_io_md": false, 00:09:21.862 "write_zeroes": true, 00:09:21.862 "zcopy": true, 00:09:21.862 "get_zone_info": false, 00:09:21.862 "zone_management": false, 00:09:21.862 "zone_append": false, 00:09:21.862 "compare": false, 00:09:21.862 "compare_and_write": false, 00:09:21.862 "abort": true, 00:09:21.862 "seek_hole": false, 00:09:21.862 "seek_data": false, 00:09:21.862 "copy": true, 00:09:21.862 "nvme_iov_md": false 00:09:21.862 }, 00:09:21.862 "memory_domains": [ 00:09:21.862 { 00:09:21.862 "dma_device_id": "system", 00:09:21.862 "dma_device_type": 1 00:09:21.862 }, 00:09:21.862 { 00:09:21.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.862 "dma_device_type": 2 00:09:21.862 } 00:09:21.862 ], 00:09:21.862 "driver_specific": {} 00:09:21.862 } 00:09:21.862 ] 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.862 12:29:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.862 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.862 "name": "Existed_Raid", 00:09:21.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.862 "strip_size_kb": 0, 00:09:21.862 "state": "configuring", 00:09:21.862 "raid_level": "raid1", 00:09:21.862 "superblock": false, 00:09:21.862 "num_base_bdevs": 3, 00:09:21.862 "num_base_bdevs_discovered": 1, 00:09:21.862 "num_base_bdevs_operational": 3, 00:09:21.862 "base_bdevs_list": [ 00:09:21.862 { 00:09:21.862 "name": "BaseBdev1", 00:09:21.862 "uuid": "34db4804-877a-4f61-8a50-9a30457acaaf", 00:09:21.862 "is_configured": true, 00:09:21.862 "data_offset": 0, 00:09:21.862 "data_size": 65536 00:09:21.862 }, 00:09:21.862 { 00:09:21.862 "name": "BaseBdev2", 00:09:21.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.863 "is_configured": false, 00:09:21.863 
"data_offset": 0, 00:09:21.863 "data_size": 0 00:09:21.863 }, 00:09:21.863 { 00:09:21.863 "name": "BaseBdev3", 00:09:21.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.863 "is_configured": false, 00:09:21.863 "data_offset": 0, 00:09:21.863 "data_size": 0 00:09:21.863 } 00:09:21.863 ] 00:09:21.863 }' 00:09:21.863 12:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.863 12:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.123 [2024-11-19 12:29:27.353385] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.123 [2024-11-19 12:29:27.353461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.123 [2024-11-19 12:29:27.365426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.123 [2024-11-19 12:29:27.367299] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.123 [2024-11-19 12:29:27.367346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:09:22.123 [2024-11-19 12:29:27.367356] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.123 [2024-11-19 12:29:27.367367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.123 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.383 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.383 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.383 "name": "Existed_Raid", 00:09:22.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.383 "strip_size_kb": 0, 00:09:22.383 "state": "configuring", 00:09:22.383 "raid_level": "raid1", 00:09:22.383 "superblock": false, 00:09:22.383 "num_base_bdevs": 3, 00:09:22.383 "num_base_bdevs_discovered": 1, 00:09:22.383 "num_base_bdevs_operational": 3, 00:09:22.383 "base_bdevs_list": [ 00:09:22.383 { 00:09:22.383 "name": "BaseBdev1", 00:09:22.383 "uuid": "34db4804-877a-4f61-8a50-9a30457acaaf", 00:09:22.383 "is_configured": true, 00:09:22.383 "data_offset": 0, 00:09:22.383 "data_size": 65536 00:09:22.383 }, 00:09:22.383 { 00:09:22.383 "name": "BaseBdev2", 00:09:22.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.383 "is_configured": false, 00:09:22.383 "data_offset": 0, 00:09:22.383 "data_size": 0 00:09:22.383 }, 00:09:22.383 { 00:09:22.383 "name": "BaseBdev3", 00:09:22.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.383 "is_configured": false, 00:09:22.383 "data_offset": 0, 00:09:22.383 "data_size": 0 00:09:22.383 } 00:09:22.383 ] 00:09:22.383 }' 00:09:22.383 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.383 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.641 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:22.641 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.641 
12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.641 [2024-11-19 12:29:27.823189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.641 BaseBdev2 00:09:22.641 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.642 [ 00:09:22.642 { 00:09:22.642 "name": "BaseBdev2", 00:09:22.642 "aliases": [ 00:09:22.642 "54518159-1b6e-4241-b130-9aad7b307b36" 00:09:22.642 ], 00:09:22.642 "product_name": 
"Malloc disk", 00:09:22.642 "block_size": 512, 00:09:22.642 "num_blocks": 65536, 00:09:22.642 "uuid": "54518159-1b6e-4241-b130-9aad7b307b36", 00:09:22.642 "assigned_rate_limits": { 00:09:22.642 "rw_ios_per_sec": 0, 00:09:22.642 "rw_mbytes_per_sec": 0, 00:09:22.642 "r_mbytes_per_sec": 0, 00:09:22.642 "w_mbytes_per_sec": 0 00:09:22.642 }, 00:09:22.642 "claimed": true, 00:09:22.642 "claim_type": "exclusive_write", 00:09:22.642 "zoned": false, 00:09:22.642 "supported_io_types": { 00:09:22.642 "read": true, 00:09:22.642 "write": true, 00:09:22.642 "unmap": true, 00:09:22.642 "flush": true, 00:09:22.642 "reset": true, 00:09:22.642 "nvme_admin": false, 00:09:22.642 "nvme_io": false, 00:09:22.642 "nvme_io_md": false, 00:09:22.642 "write_zeroes": true, 00:09:22.642 "zcopy": true, 00:09:22.642 "get_zone_info": false, 00:09:22.642 "zone_management": false, 00:09:22.642 "zone_append": false, 00:09:22.642 "compare": false, 00:09:22.642 "compare_and_write": false, 00:09:22.642 "abort": true, 00:09:22.642 "seek_hole": false, 00:09:22.642 "seek_data": false, 00:09:22.642 "copy": true, 00:09:22.642 "nvme_iov_md": false 00:09:22.642 }, 00:09:22.642 "memory_domains": [ 00:09:22.642 { 00:09:22.642 "dma_device_id": "system", 00:09:22.642 "dma_device_type": 1 00:09:22.642 }, 00:09:22.642 { 00:09:22.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.642 "dma_device_type": 2 00:09:22.642 } 00:09:22.642 ], 00:09:22.642 "driver_specific": {} 00:09:22.642 } 00:09:22.642 ] 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.642 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.901 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.901 "name": "Existed_Raid", 00:09:22.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.901 "strip_size_kb": 0, 00:09:22.901 "state": "configuring", 00:09:22.901 "raid_level": "raid1", 00:09:22.901 "superblock": false, 00:09:22.901 
"num_base_bdevs": 3, 00:09:22.901 "num_base_bdevs_discovered": 2, 00:09:22.901 "num_base_bdevs_operational": 3, 00:09:22.901 "base_bdevs_list": [ 00:09:22.901 { 00:09:22.901 "name": "BaseBdev1", 00:09:22.901 "uuid": "34db4804-877a-4f61-8a50-9a30457acaaf", 00:09:22.901 "is_configured": true, 00:09:22.901 "data_offset": 0, 00:09:22.901 "data_size": 65536 00:09:22.901 }, 00:09:22.901 { 00:09:22.901 "name": "BaseBdev2", 00:09:22.901 "uuid": "54518159-1b6e-4241-b130-9aad7b307b36", 00:09:22.901 "is_configured": true, 00:09:22.901 "data_offset": 0, 00:09:22.901 "data_size": 65536 00:09:22.901 }, 00:09:22.901 { 00:09:22.901 "name": "BaseBdev3", 00:09:22.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.901 "is_configured": false, 00:09:22.901 "data_offset": 0, 00:09:22.901 "data_size": 0 00:09:22.901 } 00:09:22.901 ] 00:09:22.901 }' 00:09:22.901 12:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.901 12:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.160 [2024-11-19 12:29:28.361361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.160 [2024-11-19 12:29:28.361416] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:23.160 [2024-11-19 12:29:28.361426] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:23.160 [2024-11-19 12:29:28.361713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:23.160 [2024-11-19 12:29:28.361878] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:09:23.160 [2024-11-19 12:29:28.361889] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:23.160 [2024-11-19 12:29:28.362086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.160 BaseBdev3 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.160 [ 00:09:23.160 { 00:09:23.160 "name": "BaseBdev3", 00:09:23.160 "aliases": [ 00:09:23.160 
"906057a4-b286-4e9b-a341-8dfd48298cee" 00:09:23.160 ], 00:09:23.160 "product_name": "Malloc disk", 00:09:23.160 "block_size": 512, 00:09:23.160 "num_blocks": 65536, 00:09:23.160 "uuid": "906057a4-b286-4e9b-a341-8dfd48298cee", 00:09:23.160 "assigned_rate_limits": { 00:09:23.160 "rw_ios_per_sec": 0, 00:09:23.160 "rw_mbytes_per_sec": 0, 00:09:23.160 "r_mbytes_per_sec": 0, 00:09:23.160 "w_mbytes_per_sec": 0 00:09:23.160 }, 00:09:23.160 "claimed": true, 00:09:23.160 "claim_type": "exclusive_write", 00:09:23.160 "zoned": false, 00:09:23.160 "supported_io_types": { 00:09:23.160 "read": true, 00:09:23.160 "write": true, 00:09:23.160 "unmap": true, 00:09:23.160 "flush": true, 00:09:23.160 "reset": true, 00:09:23.160 "nvme_admin": false, 00:09:23.160 "nvme_io": false, 00:09:23.160 "nvme_io_md": false, 00:09:23.160 "write_zeroes": true, 00:09:23.160 "zcopy": true, 00:09:23.160 "get_zone_info": false, 00:09:23.160 "zone_management": false, 00:09:23.160 "zone_append": false, 00:09:23.160 "compare": false, 00:09:23.160 "compare_and_write": false, 00:09:23.160 "abort": true, 00:09:23.160 "seek_hole": false, 00:09:23.160 "seek_data": false, 00:09:23.160 "copy": true, 00:09:23.160 "nvme_iov_md": false 00:09:23.160 }, 00:09:23.160 "memory_domains": [ 00:09:23.160 { 00:09:23.160 "dma_device_id": "system", 00:09:23.160 "dma_device_type": 1 00:09:23.160 }, 00:09:23.160 { 00:09:23.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.160 "dma_device_type": 2 00:09:23.160 } 00:09:23.160 ], 00:09:23.160 "driver_specific": {} 00:09:23.160 } 00:09:23.160 ] 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.160 
12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.160 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.419 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.419 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.419 "name": "Existed_Raid", 00:09:23.419 "uuid": "7ff92fa8-c58d-49c1-8797-48e8774ad7f9", 00:09:23.419 "strip_size_kb": 0, 00:09:23.419 "state": "online", 00:09:23.419 "raid_level": 
"raid1", 00:09:23.419 "superblock": false, 00:09:23.419 "num_base_bdevs": 3, 00:09:23.419 "num_base_bdevs_discovered": 3, 00:09:23.419 "num_base_bdevs_operational": 3, 00:09:23.419 "base_bdevs_list": [ 00:09:23.419 { 00:09:23.419 "name": "BaseBdev1", 00:09:23.419 "uuid": "34db4804-877a-4f61-8a50-9a30457acaaf", 00:09:23.419 "is_configured": true, 00:09:23.419 "data_offset": 0, 00:09:23.419 "data_size": 65536 00:09:23.419 }, 00:09:23.419 { 00:09:23.419 "name": "BaseBdev2", 00:09:23.419 "uuid": "54518159-1b6e-4241-b130-9aad7b307b36", 00:09:23.419 "is_configured": true, 00:09:23.419 "data_offset": 0, 00:09:23.419 "data_size": 65536 00:09:23.419 }, 00:09:23.419 { 00:09:23.419 "name": "BaseBdev3", 00:09:23.419 "uuid": "906057a4-b286-4e9b-a341-8dfd48298cee", 00:09:23.419 "is_configured": true, 00:09:23.419 "data_offset": 0, 00:09:23.419 "data_size": 65536 00:09:23.419 } 00:09:23.419 ] 00:09:23.419 }' 00:09:23.419 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.419 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.678 [2024-11-19 12:29:28.848899] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.678 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.678 "name": "Existed_Raid", 00:09:23.678 "aliases": [ 00:09:23.678 "7ff92fa8-c58d-49c1-8797-48e8774ad7f9" 00:09:23.678 ], 00:09:23.678 "product_name": "Raid Volume", 00:09:23.678 "block_size": 512, 00:09:23.678 "num_blocks": 65536, 00:09:23.678 "uuid": "7ff92fa8-c58d-49c1-8797-48e8774ad7f9", 00:09:23.678 "assigned_rate_limits": { 00:09:23.678 "rw_ios_per_sec": 0, 00:09:23.678 "rw_mbytes_per_sec": 0, 00:09:23.678 "r_mbytes_per_sec": 0, 00:09:23.678 "w_mbytes_per_sec": 0 00:09:23.678 }, 00:09:23.678 "claimed": false, 00:09:23.678 "zoned": false, 00:09:23.678 "supported_io_types": { 00:09:23.678 "read": true, 00:09:23.678 "write": true, 00:09:23.678 "unmap": false, 00:09:23.678 "flush": false, 00:09:23.678 "reset": true, 00:09:23.678 "nvme_admin": false, 00:09:23.678 "nvme_io": false, 00:09:23.678 "nvme_io_md": false, 00:09:23.678 "write_zeroes": true, 00:09:23.678 "zcopy": false, 00:09:23.678 "get_zone_info": false, 00:09:23.678 "zone_management": false, 00:09:23.678 "zone_append": false, 00:09:23.678 "compare": false, 00:09:23.678 "compare_and_write": false, 00:09:23.678 "abort": false, 00:09:23.678 "seek_hole": false, 00:09:23.678 "seek_data": false, 00:09:23.678 "copy": false, 00:09:23.678 "nvme_iov_md": false 00:09:23.678 }, 00:09:23.679 "memory_domains": [ 00:09:23.679 { 00:09:23.679 "dma_device_id": "system", 00:09:23.679 "dma_device_type": 1 00:09:23.679 }, 00:09:23.679 { 
00:09:23.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.679 "dma_device_type": 2 00:09:23.679 }, 00:09:23.679 { 00:09:23.679 "dma_device_id": "system", 00:09:23.679 "dma_device_type": 1 00:09:23.679 }, 00:09:23.679 { 00:09:23.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.679 "dma_device_type": 2 00:09:23.679 }, 00:09:23.679 { 00:09:23.679 "dma_device_id": "system", 00:09:23.679 "dma_device_type": 1 00:09:23.679 }, 00:09:23.679 { 00:09:23.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.679 "dma_device_type": 2 00:09:23.679 } 00:09:23.679 ], 00:09:23.679 "driver_specific": { 00:09:23.679 "raid": { 00:09:23.679 "uuid": "7ff92fa8-c58d-49c1-8797-48e8774ad7f9", 00:09:23.679 "strip_size_kb": 0, 00:09:23.679 "state": "online", 00:09:23.679 "raid_level": "raid1", 00:09:23.679 "superblock": false, 00:09:23.679 "num_base_bdevs": 3, 00:09:23.679 "num_base_bdevs_discovered": 3, 00:09:23.679 "num_base_bdevs_operational": 3, 00:09:23.679 "base_bdevs_list": [ 00:09:23.679 { 00:09:23.679 "name": "BaseBdev1", 00:09:23.679 "uuid": "34db4804-877a-4f61-8a50-9a30457acaaf", 00:09:23.679 "is_configured": true, 00:09:23.679 "data_offset": 0, 00:09:23.679 "data_size": 65536 00:09:23.679 }, 00:09:23.679 { 00:09:23.679 "name": "BaseBdev2", 00:09:23.679 "uuid": "54518159-1b6e-4241-b130-9aad7b307b36", 00:09:23.679 "is_configured": true, 00:09:23.679 "data_offset": 0, 00:09:23.679 "data_size": 65536 00:09:23.679 }, 00:09:23.679 { 00:09:23.679 "name": "BaseBdev3", 00:09:23.679 "uuid": "906057a4-b286-4e9b-a341-8dfd48298cee", 00:09:23.679 "is_configured": true, 00:09:23.679 "data_offset": 0, 00:09:23.679 "data_size": 65536 00:09:23.679 } 00:09:23.679 ] 00:09:23.679 } 00:09:23.679 } 00:09:23.679 }' 00:09:23.679 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.679 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:09:23.679 BaseBdev2 00:09:23.679 BaseBdev3' 00:09:23.679 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.938 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.938 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.938 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:23.938 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.938 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.938 12:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.938 12:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.938 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.938 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.938 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.938 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.938 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:23.938 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.938 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.939 [2024-11-19 12:29:29.132218] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.939 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.197 12:29:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.197 "name": "Existed_Raid", 00:09:24.197 "uuid": "7ff92fa8-c58d-49c1-8797-48e8774ad7f9", 00:09:24.197 "strip_size_kb": 0, 00:09:24.197 "state": "online", 00:09:24.197 "raid_level": "raid1", 00:09:24.197 "superblock": false, 00:09:24.197 "num_base_bdevs": 3, 00:09:24.197 "num_base_bdevs_discovered": 2, 00:09:24.197 "num_base_bdevs_operational": 2, 00:09:24.197 "base_bdevs_list": [ 00:09:24.197 { 00:09:24.197 "name": null, 00:09:24.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.197 "is_configured": false, 00:09:24.197 "data_offset": 0, 00:09:24.197 "data_size": 65536 00:09:24.197 }, 00:09:24.197 { 00:09:24.197 "name": "BaseBdev2", 00:09:24.198 "uuid": "54518159-1b6e-4241-b130-9aad7b307b36", 00:09:24.198 "is_configured": true, 00:09:24.198 "data_offset": 0, 00:09:24.198 "data_size": 65536 00:09:24.198 }, 00:09:24.198 { 00:09:24.198 "name": "BaseBdev3", 00:09:24.198 "uuid": "906057a4-b286-4e9b-a341-8dfd48298cee", 00:09:24.198 "is_configured": true, 00:09:24.198 "data_offset": 0, 00:09:24.198 "data_size": 65536 00:09:24.198 } 00:09:24.198 ] 00:09:24.198 }' 00:09:24.198 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.198 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.456 [2024-11-19 12:29:29.658619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.456 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.714 [2024-11-19 12:29:29.729831] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.714 [2024-11-19 12:29:29.729932] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.714 [2024-11-19 12:29:29.741492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.714 [2024-11-19 12:29:29.741608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.714 [2024-11-19 12:29:29.741654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 
-- # '[' -n '' ']' 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.714 BaseBdev2 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:24.714 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.715 [ 00:09:24.715 { 00:09:24.715 "name": "BaseBdev2", 00:09:24.715 "aliases": [ 00:09:24.715 "25f1dc4f-7be5-4b53-977f-19ddcbf563dc" 00:09:24.715 ], 00:09:24.715 "product_name": "Malloc disk", 00:09:24.715 "block_size": 512, 00:09:24.715 "num_blocks": 65536, 00:09:24.715 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:24.715 "assigned_rate_limits": { 00:09:24.715 "rw_ios_per_sec": 0, 00:09:24.715 "rw_mbytes_per_sec": 0, 00:09:24.715 "r_mbytes_per_sec": 0, 00:09:24.715 "w_mbytes_per_sec": 0 00:09:24.715 }, 00:09:24.715 "claimed": false, 00:09:24.715 "zoned": false, 00:09:24.715 "supported_io_types": { 00:09:24.715 "read": true, 00:09:24.715 "write": true, 00:09:24.715 "unmap": true, 00:09:24.715 "flush": true, 00:09:24.715 "reset": true, 00:09:24.715 "nvme_admin": false, 00:09:24.715 "nvme_io": false, 00:09:24.715 "nvme_io_md": false, 00:09:24.715 "write_zeroes": true, 00:09:24.715 "zcopy": true, 00:09:24.715 "get_zone_info": false, 00:09:24.715 "zone_management": false, 00:09:24.715 "zone_append": false, 00:09:24.715 "compare": false, 00:09:24.715 "compare_and_write": false, 00:09:24.715 "abort": true, 00:09:24.715 "seek_hole": false, 00:09:24.715 "seek_data": false, 00:09:24.715 "copy": true, 00:09:24.715 "nvme_iov_md": false 00:09:24.715 }, 00:09:24.715 "memory_domains": [ 00:09:24.715 { 00:09:24.715 "dma_device_id": "system", 00:09:24.715 "dma_device_type": 1 00:09:24.715 }, 00:09:24.715 { 00:09:24.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.715 "dma_device_type": 2 00:09:24.715 } 00:09:24.715 ], 00:09:24.715 "driver_specific": {} 00:09:24.715 } 00:09:24.715 ] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.715 BaseBdev3 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.715 [ 00:09:24.715 { 00:09:24.715 "name": "BaseBdev3", 00:09:24.715 "aliases": [ 00:09:24.715 "2cdfbcbf-6283-4b71-b5d1-c553fc063507" 00:09:24.715 ], 00:09:24.715 "product_name": "Malloc disk", 00:09:24.715 "block_size": 512, 00:09:24.715 "num_blocks": 65536, 00:09:24.715 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 00:09:24.715 "assigned_rate_limits": { 00:09:24.715 "rw_ios_per_sec": 0, 00:09:24.715 "rw_mbytes_per_sec": 0, 00:09:24.715 "r_mbytes_per_sec": 0, 00:09:24.715 "w_mbytes_per_sec": 0 00:09:24.715 }, 00:09:24.715 "claimed": false, 00:09:24.715 "zoned": false, 00:09:24.715 "supported_io_types": { 00:09:24.715 "read": true, 00:09:24.715 "write": true, 00:09:24.715 "unmap": true, 00:09:24.715 "flush": true, 00:09:24.715 "reset": true, 00:09:24.715 "nvme_admin": false, 00:09:24.715 "nvme_io": false, 00:09:24.715 "nvme_io_md": false, 00:09:24.715 "write_zeroes": true, 00:09:24.715 "zcopy": true, 00:09:24.715 "get_zone_info": false, 00:09:24.715 "zone_management": false, 00:09:24.715 "zone_append": false, 00:09:24.715 "compare": false, 00:09:24.715 "compare_and_write": false, 00:09:24.715 "abort": true, 00:09:24.715 "seek_hole": false, 00:09:24.715 "seek_data": false, 00:09:24.715 "copy": true, 00:09:24.715 "nvme_iov_md": false 00:09:24.715 }, 00:09:24.715 "memory_domains": [ 00:09:24.715 { 00:09:24.715 "dma_device_id": "system", 00:09:24.715 "dma_device_type": 1 00:09:24.715 }, 00:09:24.715 { 00:09:24.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.715 "dma_device_type": 2 00:09:24.715 } 00:09:24.715 ], 00:09:24.715 "driver_specific": {} 00:09:24.715 } 00:09:24.715 ] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.715 [2024-11-19 12:29:29.910359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.715 [2024-11-19 12:29:29.910492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.715 [2024-11-19 12:29:29.910552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.715 [2024-11-19 12:29:29.912482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.715 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.716 "name": "Existed_Raid", 00:09:24.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.716 "strip_size_kb": 0, 00:09:24.716 "state": "configuring", 00:09:24.716 "raid_level": "raid1", 00:09:24.716 "superblock": false, 00:09:24.716 "num_base_bdevs": 3, 00:09:24.716 "num_base_bdevs_discovered": 2, 00:09:24.716 "num_base_bdevs_operational": 3, 00:09:24.716 "base_bdevs_list": [ 00:09:24.716 { 00:09:24.716 "name": "BaseBdev1", 00:09:24.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.716 "is_configured": false, 00:09:24.716 "data_offset": 0, 00:09:24.716 "data_size": 0 00:09:24.716 }, 00:09:24.716 { 00:09:24.716 "name": "BaseBdev2", 00:09:24.716 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:24.716 "is_configured": true, 00:09:24.716 "data_offset": 0, 00:09:24.716 "data_size": 
65536 00:09:24.716 }, 00:09:24.716 { 00:09:24.716 "name": "BaseBdev3", 00:09:24.716 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 00:09:24.716 "is_configured": true, 00:09:24.716 "data_offset": 0, 00:09:24.716 "data_size": 65536 00:09:24.716 } 00:09:24.716 ] 00:09:24.716 }' 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.716 12:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.284 [2024-11-19 12:29:30.385577] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.284 12:29:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.284 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.284 "name": "Existed_Raid", 00:09:25.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.284 "strip_size_kb": 0, 00:09:25.284 "state": "configuring", 00:09:25.284 "raid_level": "raid1", 00:09:25.285 "superblock": false, 00:09:25.285 "num_base_bdevs": 3, 00:09:25.285 "num_base_bdevs_discovered": 1, 00:09:25.285 "num_base_bdevs_operational": 3, 00:09:25.285 "base_bdevs_list": [ 00:09:25.285 { 00:09:25.285 "name": "BaseBdev1", 00:09:25.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.285 "is_configured": false, 00:09:25.285 "data_offset": 0, 00:09:25.285 "data_size": 0 00:09:25.285 }, 00:09:25.285 { 00:09:25.285 "name": null, 00:09:25.285 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:25.285 "is_configured": false, 00:09:25.285 "data_offset": 0, 00:09:25.285 "data_size": 65536 00:09:25.285 }, 00:09:25.285 { 00:09:25.285 "name": "BaseBdev3", 00:09:25.285 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 00:09:25.285 "is_configured": true, 00:09:25.285 "data_offset": 0, 00:09:25.285 "data_size": 65536 00:09:25.285 } 00:09:25.285 ] 00:09:25.285 }' 00:09:25.285 12:29:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.285 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.852 [2024-11-19 12:29:30.899607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.852 BaseBdev1 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.852 [ 00:09:25.852 { 00:09:25.852 "name": "BaseBdev1", 00:09:25.852 "aliases": [ 00:09:25.852 "37c368e7-acdc-4aa7-86c9-7c517f7f9627" 00:09:25.852 ], 00:09:25.852 "product_name": "Malloc disk", 00:09:25.852 "block_size": 512, 00:09:25.852 "num_blocks": 65536, 00:09:25.852 "uuid": "37c368e7-acdc-4aa7-86c9-7c517f7f9627", 00:09:25.852 "assigned_rate_limits": { 00:09:25.852 "rw_ios_per_sec": 0, 00:09:25.852 "rw_mbytes_per_sec": 0, 00:09:25.852 "r_mbytes_per_sec": 0, 00:09:25.852 "w_mbytes_per_sec": 0 00:09:25.852 }, 00:09:25.852 "claimed": true, 00:09:25.852 "claim_type": "exclusive_write", 00:09:25.852 "zoned": false, 00:09:25.852 "supported_io_types": { 00:09:25.852 "read": true, 00:09:25.852 "write": true, 00:09:25.852 "unmap": true, 00:09:25.852 "flush": true, 00:09:25.852 "reset": true, 00:09:25.852 "nvme_admin": false, 00:09:25.852 "nvme_io": false, 00:09:25.852 "nvme_io_md": false, 00:09:25.852 "write_zeroes": true, 00:09:25.852 "zcopy": true, 00:09:25.852 "get_zone_info": false, 00:09:25.852 "zone_management": false, 
00:09:25.852 "zone_append": false, 00:09:25.852 "compare": false, 00:09:25.852 "compare_and_write": false, 00:09:25.852 "abort": true, 00:09:25.852 "seek_hole": false, 00:09:25.852 "seek_data": false, 00:09:25.852 "copy": true, 00:09:25.852 "nvme_iov_md": false 00:09:25.852 }, 00:09:25.852 "memory_domains": [ 00:09:25.852 { 00:09:25.852 "dma_device_id": "system", 00:09:25.852 "dma_device_type": 1 00:09:25.852 }, 00:09:25.852 { 00:09:25.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.852 "dma_device_type": 2 00:09:25.852 } 00:09:25.852 ], 00:09:25.852 "driver_specific": {} 00:09:25.852 } 00:09:25.852 ] 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.852 
12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.852 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.853 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.853 "name": "Existed_Raid", 00:09:25.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.853 "strip_size_kb": 0, 00:09:25.853 "state": "configuring", 00:09:25.853 "raid_level": "raid1", 00:09:25.853 "superblock": false, 00:09:25.853 "num_base_bdevs": 3, 00:09:25.853 "num_base_bdevs_discovered": 2, 00:09:25.853 "num_base_bdevs_operational": 3, 00:09:25.853 "base_bdevs_list": [ 00:09:25.853 { 00:09:25.853 "name": "BaseBdev1", 00:09:25.853 "uuid": "37c368e7-acdc-4aa7-86c9-7c517f7f9627", 00:09:25.853 "is_configured": true, 00:09:25.853 "data_offset": 0, 00:09:25.853 "data_size": 65536 00:09:25.853 }, 00:09:25.853 { 00:09:25.853 "name": null, 00:09:25.853 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:25.853 "is_configured": false, 00:09:25.853 "data_offset": 0, 00:09:25.853 "data_size": 65536 00:09:25.853 }, 00:09:25.853 { 00:09:25.853 "name": "BaseBdev3", 00:09:25.853 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 00:09:25.853 "is_configured": true, 00:09:25.853 "data_offset": 0, 00:09:25.853 "data_size": 65536 00:09:25.853 } 00:09:25.853 ] 00:09:25.853 }' 00:09:25.853 12:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.853 12:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.420 12:29:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.420 [2024-11-19 12:29:31.450810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.420 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.421 "name": "Existed_Raid", 00:09:26.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.421 "strip_size_kb": 0, 00:09:26.421 "state": "configuring", 00:09:26.421 "raid_level": "raid1", 00:09:26.421 "superblock": false, 00:09:26.421 "num_base_bdevs": 3, 00:09:26.421 "num_base_bdevs_discovered": 1, 00:09:26.421 "num_base_bdevs_operational": 3, 00:09:26.421 "base_bdevs_list": [ 00:09:26.421 { 00:09:26.421 "name": "BaseBdev1", 00:09:26.421 "uuid": "37c368e7-acdc-4aa7-86c9-7c517f7f9627", 00:09:26.421 "is_configured": true, 00:09:26.421 "data_offset": 0, 00:09:26.421 "data_size": 65536 00:09:26.421 }, 00:09:26.421 { 00:09:26.421 "name": null, 00:09:26.421 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:26.421 "is_configured": false, 00:09:26.421 "data_offset": 0, 00:09:26.421 "data_size": 65536 00:09:26.421 }, 00:09:26.421 { 00:09:26.421 "name": null, 00:09:26.421 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 
00:09:26.421 "is_configured": false, 00:09:26.421 "data_offset": 0, 00:09:26.421 "data_size": 65536 00:09:26.421 } 00:09:26.421 ] 00:09:26.421 }' 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.421 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.681 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.681 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:26.681 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.681 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.681 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.681 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.941 [2024-11-19 12:29:31.946032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.941 12:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.941 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.941 "name": "Existed_Raid", 00:09:26.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.941 "strip_size_kb": 0, 00:09:26.941 "state": "configuring", 00:09:26.941 "raid_level": "raid1", 00:09:26.941 "superblock": false, 00:09:26.941 "num_base_bdevs": 3, 00:09:26.941 "num_base_bdevs_discovered": 2, 00:09:26.941 "num_base_bdevs_operational": 3, 00:09:26.941 "base_bdevs_list": [ 00:09:26.941 { 00:09:26.941 "name": "BaseBdev1", 00:09:26.941 "uuid": "37c368e7-acdc-4aa7-86c9-7c517f7f9627", 00:09:26.941 
"is_configured": true, 00:09:26.941 "data_offset": 0, 00:09:26.941 "data_size": 65536 00:09:26.941 }, 00:09:26.941 { 00:09:26.941 "name": null, 00:09:26.941 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:26.941 "is_configured": false, 00:09:26.941 "data_offset": 0, 00:09:26.941 "data_size": 65536 00:09:26.941 }, 00:09:26.941 { 00:09:26.941 "name": "BaseBdev3", 00:09:26.941 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 00:09:26.941 "is_configured": true, 00:09:26.941 "data_offset": 0, 00:09:26.941 "data_size": 65536 00:09:26.941 } 00:09:26.941 ] 00:09:26.941 }' 00:09:26.941 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.941 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.202 [2024-11-19 12:29:32.437152] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.202 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.461 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.461 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.461 "name": "Existed_Raid", 00:09:27.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.461 "strip_size_kb": 0, 00:09:27.461 "state": 
"configuring", 00:09:27.461 "raid_level": "raid1", 00:09:27.461 "superblock": false, 00:09:27.461 "num_base_bdevs": 3, 00:09:27.461 "num_base_bdevs_discovered": 1, 00:09:27.461 "num_base_bdevs_operational": 3, 00:09:27.461 "base_bdevs_list": [ 00:09:27.461 { 00:09:27.461 "name": null, 00:09:27.461 "uuid": "37c368e7-acdc-4aa7-86c9-7c517f7f9627", 00:09:27.461 "is_configured": false, 00:09:27.461 "data_offset": 0, 00:09:27.461 "data_size": 65536 00:09:27.461 }, 00:09:27.461 { 00:09:27.461 "name": null, 00:09:27.461 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:27.461 "is_configured": false, 00:09:27.461 "data_offset": 0, 00:09:27.461 "data_size": 65536 00:09:27.461 }, 00:09:27.461 { 00:09:27.461 "name": "BaseBdev3", 00:09:27.461 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 00:09:27.461 "is_configured": true, 00:09:27.461 "data_offset": 0, 00:09:27.461 "data_size": 65536 00:09:27.461 } 00:09:27.461 ] 00:09:27.461 }' 00:09:27.461 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.461 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:27.721 12:29:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.721 [2024-11-19 12:29:32.938891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.721 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.981 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.981 "name": "Existed_Raid", 00:09:27.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.981 "strip_size_kb": 0, 00:09:27.981 "state": "configuring", 00:09:27.981 "raid_level": "raid1", 00:09:27.981 "superblock": false, 00:09:27.981 "num_base_bdevs": 3, 00:09:27.981 "num_base_bdevs_discovered": 2, 00:09:27.981 "num_base_bdevs_operational": 3, 00:09:27.981 "base_bdevs_list": [ 00:09:27.981 { 00:09:27.981 "name": null, 00:09:27.981 "uuid": "37c368e7-acdc-4aa7-86c9-7c517f7f9627", 00:09:27.981 "is_configured": false, 00:09:27.981 "data_offset": 0, 00:09:27.981 "data_size": 65536 00:09:27.981 }, 00:09:27.981 { 00:09:27.981 "name": "BaseBdev2", 00:09:27.981 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:27.981 "is_configured": true, 00:09:27.981 "data_offset": 0, 00:09:27.981 "data_size": 65536 00:09:27.981 }, 00:09:27.981 { 00:09:27.981 "name": "BaseBdev3", 00:09:27.981 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 00:09:27.981 "is_configured": true, 00:09:27.981 "data_offset": 0, 00:09:27.981 "data_size": 65536 00:09:27.981 } 00:09:27.981 ] 00:09:27.981 }' 00:09:27.981 12:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.981 12:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 37c368e7-acdc-4aa7-86c9-7c517f7f9627 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.240 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.240 [2024-11-19 12:29:33.496978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:28.241 [2024-11-19 12:29:33.497103] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:28.241 [2024-11-19 12:29:33.497127] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:28.241 [2024-11-19 12:29:33.497397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:28.241 [2024-11-19 12:29:33.497569] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:28.241 [2024-11-19 12:29:33.497613] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000006d00 00:09:28.241 [2024-11-19 12:29:33.497836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.241 NewBaseBdev 00:09:28.241 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.500 [ 00:09:28.500 { 00:09:28.500 "name": "NewBaseBdev", 00:09:28.500 "aliases": [ 00:09:28.500 "37c368e7-acdc-4aa7-86c9-7c517f7f9627" 00:09:28.500 ], 00:09:28.500 "product_name": "Malloc disk", 00:09:28.500 "block_size": 512, 00:09:28.500 "num_blocks": 65536, 
00:09:28.500 "uuid": "37c368e7-acdc-4aa7-86c9-7c517f7f9627", 00:09:28.500 "assigned_rate_limits": { 00:09:28.500 "rw_ios_per_sec": 0, 00:09:28.500 "rw_mbytes_per_sec": 0, 00:09:28.500 "r_mbytes_per_sec": 0, 00:09:28.500 "w_mbytes_per_sec": 0 00:09:28.500 }, 00:09:28.500 "claimed": true, 00:09:28.500 "claim_type": "exclusive_write", 00:09:28.500 "zoned": false, 00:09:28.500 "supported_io_types": { 00:09:28.500 "read": true, 00:09:28.500 "write": true, 00:09:28.500 "unmap": true, 00:09:28.500 "flush": true, 00:09:28.500 "reset": true, 00:09:28.500 "nvme_admin": false, 00:09:28.500 "nvme_io": false, 00:09:28.500 "nvme_io_md": false, 00:09:28.500 "write_zeroes": true, 00:09:28.500 "zcopy": true, 00:09:28.500 "get_zone_info": false, 00:09:28.500 "zone_management": false, 00:09:28.500 "zone_append": false, 00:09:28.500 "compare": false, 00:09:28.500 "compare_and_write": false, 00:09:28.500 "abort": true, 00:09:28.500 "seek_hole": false, 00:09:28.500 "seek_data": false, 00:09:28.500 "copy": true, 00:09:28.500 "nvme_iov_md": false 00:09:28.500 }, 00:09:28.500 "memory_domains": [ 00:09:28.500 { 00:09:28.500 "dma_device_id": "system", 00:09:28.500 "dma_device_type": 1 00:09:28.500 }, 00:09:28.500 { 00:09:28.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.500 "dma_device_type": 2 00:09:28.500 } 00:09:28.500 ], 00:09:28.500 "driver_specific": {} 00:09:28.500 } 00:09:28.500 ] 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.500 
12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.500 "name": "Existed_Raid", 00:09:28.500 "uuid": "137b10f8-f658-4c18-8057-26450dc08d8c", 00:09:28.500 "strip_size_kb": 0, 00:09:28.500 "state": "online", 00:09:28.500 "raid_level": "raid1", 00:09:28.500 "superblock": false, 00:09:28.500 "num_base_bdevs": 3, 00:09:28.500 "num_base_bdevs_discovered": 3, 00:09:28.500 "num_base_bdevs_operational": 3, 00:09:28.500 "base_bdevs_list": [ 00:09:28.500 { 00:09:28.500 "name": "NewBaseBdev", 00:09:28.500 "uuid": "37c368e7-acdc-4aa7-86c9-7c517f7f9627", 00:09:28.500 "is_configured": true, 00:09:28.500 
"data_offset": 0, 00:09:28.500 "data_size": 65536 00:09:28.500 }, 00:09:28.500 { 00:09:28.500 "name": "BaseBdev2", 00:09:28.500 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:28.500 "is_configured": true, 00:09:28.500 "data_offset": 0, 00:09:28.500 "data_size": 65536 00:09:28.500 }, 00:09:28.500 { 00:09:28.500 "name": "BaseBdev3", 00:09:28.500 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 00:09:28.500 "is_configured": true, 00:09:28.500 "data_offset": 0, 00:09:28.500 "data_size": 65536 00:09:28.500 } 00:09:28.500 ] 00:09:28.500 }' 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.500 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.760 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.760 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.761 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.761 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.761 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.761 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.761 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:28.761 12:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:28.761 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.761 12:29:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.761 [2024-11-19 12:29:33.996541] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:09:28.761 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.021 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.021 "name": "Existed_Raid", 00:09:29.021 "aliases": [ 00:09:29.021 "137b10f8-f658-4c18-8057-26450dc08d8c" 00:09:29.021 ], 00:09:29.021 "product_name": "Raid Volume", 00:09:29.021 "block_size": 512, 00:09:29.021 "num_blocks": 65536, 00:09:29.021 "uuid": "137b10f8-f658-4c18-8057-26450dc08d8c", 00:09:29.021 "assigned_rate_limits": { 00:09:29.021 "rw_ios_per_sec": 0, 00:09:29.021 "rw_mbytes_per_sec": 0, 00:09:29.021 "r_mbytes_per_sec": 0, 00:09:29.021 "w_mbytes_per_sec": 0 00:09:29.021 }, 00:09:29.021 "claimed": false, 00:09:29.021 "zoned": false, 00:09:29.021 "supported_io_types": { 00:09:29.021 "read": true, 00:09:29.021 "write": true, 00:09:29.021 "unmap": false, 00:09:29.021 "flush": false, 00:09:29.021 "reset": true, 00:09:29.021 "nvme_admin": false, 00:09:29.021 "nvme_io": false, 00:09:29.021 "nvme_io_md": false, 00:09:29.021 "write_zeroes": true, 00:09:29.021 "zcopy": false, 00:09:29.021 "get_zone_info": false, 00:09:29.021 "zone_management": false, 00:09:29.021 "zone_append": false, 00:09:29.021 "compare": false, 00:09:29.021 "compare_and_write": false, 00:09:29.021 "abort": false, 00:09:29.021 "seek_hole": false, 00:09:29.021 "seek_data": false, 00:09:29.021 "copy": false, 00:09:29.021 "nvme_iov_md": false 00:09:29.021 }, 00:09:29.021 "memory_domains": [ 00:09:29.021 { 00:09:29.021 "dma_device_id": "system", 00:09:29.021 "dma_device_type": 1 00:09:29.021 }, 00:09:29.021 { 00:09:29.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.021 "dma_device_type": 2 00:09:29.021 }, 00:09:29.021 { 00:09:29.021 "dma_device_id": "system", 00:09:29.021 "dma_device_type": 1 00:09:29.021 }, 00:09:29.021 { 00:09:29.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.021 "dma_device_type": 2 00:09:29.021 }, 00:09:29.021 { 00:09:29.021 "dma_device_id": 
"system", 00:09:29.021 "dma_device_type": 1 00:09:29.021 }, 00:09:29.021 { 00:09:29.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.021 "dma_device_type": 2 00:09:29.021 } 00:09:29.021 ], 00:09:29.021 "driver_specific": { 00:09:29.021 "raid": { 00:09:29.021 "uuid": "137b10f8-f658-4c18-8057-26450dc08d8c", 00:09:29.021 "strip_size_kb": 0, 00:09:29.021 "state": "online", 00:09:29.021 "raid_level": "raid1", 00:09:29.021 "superblock": false, 00:09:29.021 "num_base_bdevs": 3, 00:09:29.021 "num_base_bdevs_discovered": 3, 00:09:29.021 "num_base_bdevs_operational": 3, 00:09:29.021 "base_bdevs_list": [ 00:09:29.021 { 00:09:29.021 "name": "NewBaseBdev", 00:09:29.021 "uuid": "37c368e7-acdc-4aa7-86c9-7c517f7f9627", 00:09:29.021 "is_configured": true, 00:09:29.021 "data_offset": 0, 00:09:29.021 "data_size": 65536 00:09:29.021 }, 00:09:29.021 { 00:09:29.021 "name": "BaseBdev2", 00:09:29.021 "uuid": "25f1dc4f-7be5-4b53-977f-19ddcbf563dc", 00:09:29.021 "is_configured": true, 00:09:29.021 "data_offset": 0, 00:09:29.021 "data_size": 65536 00:09:29.021 }, 00:09:29.021 { 00:09:29.021 "name": "BaseBdev3", 00:09:29.021 "uuid": "2cdfbcbf-6283-4b71-b5d1-c553fc063507", 00:09:29.021 "is_configured": true, 00:09:29.021 "data_offset": 0, 00:09:29.021 "data_size": 65536 00:09:29.021 } 00:09:29.021 ] 00:09:29.021 } 00:09:29.021 } 00:09:29.022 }' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:29.022 BaseBdev2 00:09:29.022 BaseBdev3' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.022 12:29:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.022 
12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.022 [2024-11-19 12:29:34.255811] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.022 [2024-11-19 12:29:34.255934] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.022 [2024-11-19 12:29:34.256025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.022 [2024-11-19 12:29:34.256279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.022 [2024-11-19 12:29:34.256293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 78628 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78628 ']' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78628 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.022 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78628 00:09:29.282 killing process with pid 78628 00:09:29.282 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.282 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.282 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78628' 00:09:29.282 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78628 00:09:29.282 [2024-11-19 12:29:34.302770] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.282 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78628 00:09:29.282 [2024-11-19 12:29:34.333025] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:29.544 00:09:29.544 real 0m9.108s 00:09:29.544 user 0m15.508s 00:09:29.544 sys 0m1.882s 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.544 ************************************ 00:09:29.544 END TEST raid_state_function_test 00:09:29.544 ************************************ 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:29.544 12:29:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:29.544 12:29:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:29.544 12:29:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.544 12:29:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.544 ************************************ 00:09:29.544 START TEST raid_state_function_test_sb 00:09:29.544 ************************************ 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.544 
12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79233 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79233' 00:09:29.544 Process raid pid: 79233 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 
79233 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79233 ']' 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.544 12:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.544 [2024-11-19 12:29:34.751468] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:29.544 [2024-11-19 12:29:34.751694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.805 [2024-11-19 12:29:34.913121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.805 [2024-11-19 12:29:34.960811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.805 [2024-11-19 12:29:35.003196] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.805 [2024-11-19 12:29:35.003313] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.374 [2024-11-19 12:29:35.608612] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.374 [2024-11-19 12:29:35.608771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.374 [2024-11-19 12:29:35.608790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.374 [2024-11-19 12:29:35.608801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.374 [2024-11-19 12:29:35.608807] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:30.374 [2024-11-19 12:29:35.608818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.374 12:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.634 12:29:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.634 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.634 "name": "Existed_Raid", 00:09:30.634 "uuid": "f4ce6cda-1503-4e50-99b9-e989c0045fb1", 00:09:30.634 "strip_size_kb": 0, 00:09:30.634 "state": "configuring", 00:09:30.634 "raid_level": "raid1", 00:09:30.634 "superblock": true, 00:09:30.634 "num_base_bdevs": 3, 00:09:30.634 "num_base_bdevs_discovered": 0, 00:09:30.634 "num_base_bdevs_operational": 3, 00:09:30.634 "base_bdevs_list": [ 00:09:30.634 { 00:09:30.634 "name": "BaseBdev1", 00:09:30.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.634 "is_configured": false, 00:09:30.634 "data_offset": 0, 00:09:30.634 "data_size": 0 00:09:30.634 }, 00:09:30.634 { 00:09:30.634 "name": "BaseBdev2", 00:09:30.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.634 "is_configured": false, 00:09:30.634 "data_offset": 0, 00:09:30.634 "data_size": 0 00:09:30.634 }, 00:09:30.634 { 00:09:30.634 "name": "BaseBdev3", 00:09:30.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.634 "is_configured": false, 00:09:30.634 "data_offset": 0, 00:09:30.634 "data_size": 0 00:09:30.634 } 00:09:30.634 ] 00:09:30.634 }' 00:09:30.634 12:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.634 12:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.896 [2024-11-19 12:29:36.059829] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.896 [2024-11-19 12:29:36.059948] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.896 [2024-11-19 12:29:36.071857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.896 [2024-11-19 12:29:36.071968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.896 [2024-11-19 12:29:36.071996] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.896 [2024-11-19 12:29:36.072020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.896 [2024-11-19 12:29:36.072039] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.896 [2024-11-19 12:29:36.072060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.896 [2024-11-19 12:29:36.092975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.896 BaseBdev1 
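In the JSON dumps that follow, `Existed_Raid` stays in state `"configuring"` while `num_base_bdevs_discovered` is below `num_base_bdevs_operational`, and flips to `"online"` only once every base bdev slot is claimed. A toy model of that state rule — field names are taken from the log, but the logic here is an illustrative simplification, not SPDK's actual `bdev_raid.c` code:

```shell
#!/usr/bin/env bash
# Toy model of the raid bdev state shown in the surrounding dumps:
# "configuring" until all operational base bdevs are discovered.
num_base_bdevs_operational=3
num_base_bdevs_discovered=1   # only BaseBdev1 has been claimed so far
if (( num_base_bdevs_discovered == num_base_bdevs_operational )); then
  state=online
else
  state=configuring
fi
echo "Existed_Raid state: $state"
```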
00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.896 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.896 [ 00:09:30.896 { 00:09:30.896 "name": "BaseBdev1", 00:09:30.896 "aliases": [ 00:09:30.896 "b2923025-4bf2-4254-ae4b-8b0a0d3b3a6c" 00:09:30.896 ], 00:09:30.896 "product_name": "Malloc disk", 00:09:30.896 "block_size": 512, 00:09:30.896 "num_blocks": 65536, 00:09:30.896 "uuid": "b2923025-4bf2-4254-ae4b-8b0a0d3b3a6c", 00:09:30.896 "assigned_rate_limits": { 00:09:30.896 
"rw_ios_per_sec": 0, 00:09:30.897 "rw_mbytes_per_sec": 0, 00:09:30.897 "r_mbytes_per_sec": 0, 00:09:30.897 "w_mbytes_per_sec": 0 00:09:30.897 }, 00:09:30.897 "claimed": true, 00:09:30.897 "claim_type": "exclusive_write", 00:09:30.897 "zoned": false, 00:09:30.897 "supported_io_types": { 00:09:30.897 "read": true, 00:09:30.897 "write": true, 00:09:30.897 "unmap": true, 00:09:30.897 "flush": true, 00:09:30.897 "reset": true, 00:09:30.897 "nvme_admin": false, 00:09:30.897 "nvme_io": false, 00:09:30.897 "nvme_io_md": false, 00:09:30.897 "write_zeroes": true, 00:09:30.897 "zcopy": true, 00:09:30.897 "get_zone_info": false, 00:09:30.897 "zone_management": false, 00:09:30.897 "zone_append": false, 00:09:30.897 "compare": false, 00:09:30.897 "compare_and_write": false, 00:09:30.897 "abort": true, 00:09:30.897 "seek_hole": false, 00:09:30.897 "seek_data": false, 00:09:30.897 "copy": true, 00:09:30.897 "nvme_iov_md": false 00:09:30.897 }, 00:09:30.897 "memory_domains": [ 00:09:30.897 { 00:09:30.897 "dma_device_id": "system", 00:09:30.897 "dma_device_type": 1 00:09:30.897 }, 00:09:30.897 { 00:09:30.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.897 "dma_device_type": 2 00:09:30.897 } 00:09:30.897 ], 00:09:30.897 "driver_specific": {} 00:09:30.897 } 00:09:30.897 ] 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.897 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.164 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.164 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.164 "name": "Existed_Raid", 00:09:31.164 "uuid": "f7f1b105-7a64-4fba-b0a5-334942f70ca8", 00:09:31.164 "strip_size_kb": 0, 00:09:31.164 "state": "configuring", 00:09:31.164 "raid_level": "raid1", 00:09:31.164 "superblock": true, 00:09:31.164 "num_base_bdevs": 3, 00:09:31.164 "num_base_bdevs_discovered": 1, 00:09:31.164 "num_base_bdevs_operational": 3, 00:09:31.164 "base_bdevs_list": [ 00:09:31.164 { 00:09:31.164 "name": "BaseBdev1", 00:09:31.164 "uuid": "b2923025-4bf2-4254-ae4b-8b0a0d3b3a6c", 00:09:31.164 "is_configured": true, 00:09:31.164 "data_offset": 2048, 00:09:31.164 "data_size": 63488 
00:09:31.164 }, 00:09:31.164 { 00:09:31.164 "name": "BaseBdev2", 00:09:31.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.164 "is_configured": false, 00:09:31.164 "data_offset": 0, 00:09:31.164 "data_size": 0 00:09:31.164 }, 00:09:31.164 { 00:09:31.164 "name": "BaseBdev3", 00:09:31.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.164 "is_configured": false, 00:09:31.164 "data_offset": 0, 00:09:31.164 "data_size": 0 00:09:31.164 } 00:09:31.164 ] 00:09:31.164 }' 00:09:31.164 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.164 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.423 [2024-11-19 12:29:36.608184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.423 [2024-11-19 12:29:36.608254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.423 [2024-11-19 12:29:36.616180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.423 [2024-11-19 12:29:36.618127] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.423 [2024-11-19 12:29:36.618176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.423 [2024-11-19 12:29:36.618187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.423 [2024-11-19 12:29:36.618197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:31.423 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.424 "name": "Existed_Raid", 00:09:31.424 "uuid": "29331ecf-b42e-4570-87c2-d36c574bd2a8", 00:09:31.424 "strip_size_kb": 0, 00:09:31.424 "state": "configuring", 00:09:31.424 "raid_level": "raid1", 00:09:31.424 "superblock": true, 00:09:31.424 "num_base_bdevs": 3, 00:09:31.424 "num_base_bdevs_discovered": 1, 00:09:31.424 "num_base_bdevs_operational": 3, 00:09:31.424 "base_bdevs_list": [ 00:09:31.424 { 00:09:31.424 "name": "BaseBdev1", 00:09:31.424 "uuid": "b2923025-4bf2-4254-ae4b-8b0a0d3b3a6c", 00:09:31.424 "is_configured": true, 00:09:31.424 "data_offset": 2048, 00:09:31.424 "data_size": 63488 00:09:31.424 }, 00:09:31.424 { 00:09:31.424 "name": "BaseBdev2", 00:09:31.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.424 "is_configured": false, 00:09:31.424 "data_offset": 0, 00:09:31.424 "data_size": 0 00:09:31.424 }, 00:09:31.424 { 00:09:31.424 "name": "BaseBdev3", 00:09:31.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.424 "is_configured": false, 00:09:31.424 "data_offset": 0, 00:09:31.424 "data_size": 0 00:09:31.424 } 00:09:31.424 ] 00:09:31.424 }' 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.424 12:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.994 [2024-11-19 12:29:37.069267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.994 BaseBdev2 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.994 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.995 [ 00:09:31.995 { 00:09:31.995 "name": "BaseBdev2", 00:09:31.995 "aliases": [ 00:09:31.995 "7fb9e5a3-cbde-46c0-95aa-ec4a7a32fa41" 00:09:31.995 ], 00:09:31.995 "product_name": "Malloc disk", 00:09:31.995 "block_size": 512, 00:09:31.995 "num_blocks": 65536, 00:09:31.995 "uuid": "7fb9e5a3-cbde-46c0-95aa-ec4a7a32fa41", 00:09:31.995 "assigned_rate_limits": { 00:09:31.995 "rw_ios_per_sec": 0, 00:09:31.995 "rw_mbytes_per_sec": 0, 00:09:31.995 "r_mbytes_per_sec": 0, 00:09:31.995 "w_mbytes_per_sec": 0 00:09:31.995 }, 00:09:31.995 "claimed": true, 00:09:31.995 "claim_type": "exclusive_write", 00:09:31.995 "zoned": false, 00:09:31.995 "supported_io_types": { 00:09:31.995 "read": true, 00:09:31.995 "write": true, 00:09:31.995 "unmap": true, 00:09:31.995 "flush": true, 00:09:31.995 "reset": true, 00:09:31.995 "nvme_admin": false, 00:09:31.995 "nvme_io": false, 00:09:31.995 "nvme_io_md": false, 00:09:31.995 "write_zeroes": true, 00:09:31.995 "zcopy": true, 00:09:31.995 "get_zone_info": false, 00:09:31.995 "zone_management": false, 00:09:31.995 "zone_append": false, 00:09:31.995 "compare": false, 00:09:31.995 "compare_and_write": false, 00:09:31.995 "abort": true, 00:09:31.995 "seek_hole": false, 00:09:31.995 "seek_data": false, 00:09:31.995 "copy": true, 00:09:31.995 "nvme_iov_md": false 00:09:31.995 }, 00:09:31.995 "memory_domains": [ 00:09:31.995 { 00:09:31.995 "dma_device_id": "system", 00:09:31.995 "dma_device_type": 1 00:09:31.995 }, 00:09:31.995 { 00:09:31.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.995 "dma_device_type": 2 00:09:31.995 } 00:09:31.995 ], 00:09:31.995 "driver_specific": {} 00:09:31.995 } 00:09:31.995 ] 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.995 
12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.995 "name": "Existed_Raid", 00:09:31.995 "uuid": "29331ecf-b42e-4570-87c2-d36c574bd2a8", 00:09:31.995 "strip_size_kb": 0, 00:09:31.995 "state": "configuring", 00:09:31.995 "raid_level": "raid1", 00:09:31.995 "superblock": true, 00:09:31.995 "num_base_bdevs": 3, 00:09:31.995 "num_base_bdevs_discovered": 2, 00:09:31.995 "num_base_bdevs_operational": 3, 00:09:31.995 "base_bdevs_list": [ 00:09:31.995 { 00:09:31.995 "name": "BaseBdev1", 00:09:31.995 "uuid": "b2923025-4bf2-4254-ae4b-8b0a0d3b3a6c", 00:09:31.995 "is_configured": true, 00:09:31.995 "data_offset": 2048, 00:09:31.995 "data_size": 63488 00:09:31.995 }, 00:09:31.995 { 00:09:31.995 "name": "BaseBdev2", 00:09:31.995 "uuid": "7fb9e5a3-cbde-46c0-95aa-ec4a7a32fa41", 00:09:31.995 "is_configured": true, 00:09:31.995 "data_offset": 2048, 00:09:31.995 "data_size": 63488 00:09:31.995 }, 00:09:31.995 { 00:09:31.995 "name": "BaseBdev3", 00:09:31.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.995 "is_configured": false, 00:09:31.995 "data_offset": 0, 00:09:31.995 "data_size": 0 00:09:31.995 } 00:09:31.995 ] 00:09:31.995 }' 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.995 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.566 [2024-11-19 12:29:37.567439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.566 [2024-11-19 12:29:37.567655] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:09:32.566 [2024-11-19 12:29:37.567673] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.566 BaseBdev3 00:09:32.566 [2024-11-19 12:29:37.567990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:32.566 [2024-11-19 12:29:37.568147] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:32.566 [2024-11-19 12:29:37.568158] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:32.566 [2024-11-19 12:29:37.568276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.566 12:29:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.566 [ 00:09:32.566 { 00:09:32.566 "name": "BaseBdev3", 00:09:32.566 "aliases": [ 00:09:32.566 "807c3a19-4968-497d-ada9-4a476a964c1f" 00:09:32.566 ], 00:09:32.566 "product_name": "Malloc disk", 00:09:32.566 "block_size": 512, 00:09:32.566 "num_blocks": 65536, 00:09:32.566 "uuid": "807c3a19-4968-497d-ada9-4a476a964c1f", 00:09:32.566 "assigned_rate_limits": { 00:09:32.566 "rw_ios_per_sec": 0, 00:09:32.566 "rw_mbytes_per_sec": 0, 00:09:32.566 "r_mbytes_per_sec": 0, 00:09:32.566 "w_mbytes_per_sec": 0 00:09:32.566 }, 00:09:32.566 "claimed": true, 00:09:32.566 "claim_type": "exclusive_write", 00:09:32.566 "zoned": false, 00:09:32.566 "supported_io_types": { 00:09:32.566 "read": true, 00:09:32.566 "write": true, 00:09:32.566 "unmap": true, 00:09:32.566 "flush": true, 00:09:32.566 "reset": true, 00:09:32.566 "nvme_admin": false, 00:09:32.566 "nvme_io": false, 00:09:32.566 "nvme_io_md": false, 00:09:32.566 "write_zeroes": true, 00:09:32.566 "zcopy": true, 00:09:32.566 "get_zone_info": false, 00:09:32.566 "zone_management": false, 00:09:32.566 "zone_append": false, 00:09:32.566 "compare": false, 00:09:32.566 "compare_and_write": false, 00:09:32.566 "abort": true, 00:09:32.566 "seek_hole": false, 00:09:32.566 "seek_data": false, 00:09:32.566 "copy": true, 00:09:32.566 "nvme_iov_md": false 00:09:32.566 }, 00:09:32.566 "memory_domains": [ 00:09:32.566 { 00:09:32.566 "dma_device_id": "system", 00:09:32.566 "dma_device_type": 1 00:09:32.566 }, 00:09:32.566 { 00:09:32.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.566 "dma_device_type": 2 00:09:32.566 } 00:09:32.566 ], 00:09:32.566 "driver_specific": {} 00:09:32.566 } 00:09:32.566 ] 
00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.566 
12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.566 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.566 "name": "Existed_Raid", 00:09:32.566 "uuid": "29331ecf-b42e-4570-87c2-d36c574bd2a8", 00:09:32.566 "strip_size_kb": 0, 00:09:32.566 "state": "online", 00:09:32.566 "raid_level": "raid1", 00:09:32.566 "superblock": true, 00:09:32.566 "num_base_bdevs": 3, 00:09:32.566 "num_base_bdevs_discovered": 3, 00:09:32.566 "num_base_bdevs_operational": 3, 00:09:32.566 "base_bdevs_list": [ 00:09:32.566 { 00:09:32.566 "name": "BaseBdev1", 00:09:32.566 "uuid": "b2923025-4bf2-4254-ae4b-8b0a0d3b3a6c", 00:09:32.566 "is_configured": true, 00:09:32.566 "data_offset": 2048, 00:09:32.566 "data_size": 63488 00:09:32.566 }, 00:09:32.566 { 00:09:32.566 "name": "BaseBdev2", 00:09:32.566 "uuid": "7fb9e5a3-cbde-46c0-95aa-ec4a7a32fa41", 00:09:32.566 "is_configured": true, 00:09:32.566 "data_offset": 2048, 00:09:32.566 "data_size": 63488 00:09:32.566 }, 00:09:32.566 { 00:09:32.566 "name": "BaseBdev3", 00:09:32.566 "uuid": "807c3a19-4968-497d-ada9-4a476a964c1f", 00:09:32.567 "is_configured": true, 00:09:32.567 "data_offset": 2048, 00:09:32.567 "data_size": 63488 00:09:32.567 } 00:09:32.567 ] 00:09:32.567 }' 00:09:32.567 12:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.567 12:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.826 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:32.827 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:32.827 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:32.827 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.827 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.827 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.827 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.827 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:32.827 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.827 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.827 [2024-11-19 12:29:38.083022] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.087 "name": "Existed_Raid", 00:09:33.087 "aliases": [ 00:09:33.087 "29331ecf-b42e-4570-87c2-d36c574bd2a8" 00:09:33.087 ], 00:09:33.087 "product_name": "Raid Volume", 00:09:33.087 "block_size": 512, 00:09:33.087 "num_blocks": 63488, 00:09:33.087 "uuid": "29331ecf-b42e-4570-87c2-d36c574bd2a8", 00:09:33.087 "assigned_rate_limits": { 00:09:33.087 "rw_ios_per_sec": 0, 00:09:33.087 "rw_mbytes_per_sec": 0, 00:09:33.087 "r_mbytes_per_sec": 0, 00:09:33.087 "w_mbytes_per_sec": 0 00:09:33.087 }, 00:09:33.087 "claimed": false, 00:09:33.087 "zoned": false, 00:09:33.087 "supported_io_types": { 00:09:33.087 "read": true, 00:09:33.087 "write": true, 00:09:33.087 "unmap": false, 00:09:33.087 "flush": false, 00:09:33.087 "reset": true, 00:09:33.087 "nvme_admin": false, 00:09:33.087 "nvme_io": false, 00:09:33.087 "nvme_io_md": false, 00:09:33.087 "write_zeroes": true, 
00:09:33.087 "zcopy": false, 00:09:33.087 "get_zone_info": false, 00:09:33.087 "zone_management": false, 00:09:33.087 "zone_append": false, 00:09:33.087 "compare": false, 00:09:33.087 "compare_and_write": false, 00:09:33.087 "abort": false, 00:09:33.087 "seek_hole": false, 00:09:33.087 "seek_data": false, 00:09:33.087 "copy": false, 00:09:33.087 "nvme_iov_md": false 00:09:33.087 }, 00:09:33.087 "memory_domains": [ 00:09:33.087 { 00:09:33.087 "dma_device_id": "system", 00:09:33.087 "dma_device_type": 1 00:09:33.087 }, 00:09:33.087 { 00:09:33.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.087 "dma_device_type": 2 00:09:33.087 }, 00:09:33.087 { 00:09:33.087 "dma_device_id": "system", 00:09:33.087 "dma_device_type": 1 00:09:33.087 }, 00:09:33.087 { 00:09:33.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.087 "dma_device_type": 2 00:09:33.087 }, 00:09:33.087 { 00:09:33.087 "dma_device_id": "system", 00:09:33.087 "dma_device_type": 1 00:09:33.087 }, 00:09:33.087 { 00:09:33.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.087 "dma_device_type": 2 00:09:33.087 } 00:09:33.087 ], 00:09:33.087 "driver_specific": { 00:09:33.087 "raid": { 00:09:33.087 "uuid": "29331ecf-b42e-4570-87c2-d36c574bd2a8", 00:09:33.087 "strip_size_kb": 0, 00:09:33.087 "state": "online", 00:09:33.087 "raid_level": "raid1", 00:09:33.087 "superblock": true, 00:09:33.087 "num_base_bdevs": 3, 00:09:33.087 "num_base_bdevs_discovered": 3, 00:09:33.087 "num_base_bdevs_operational": 3, 00:09:33.087 "base_bdevs_list": [ 00:09:33.087 { 00:09:33.087 "name": "BaseBdev1", 00:09:33.087 "uuid": "b2923025-4bf2-4254-ae4b-8b0a0d3b3a6c", 00:09:33.087 "is_configured": true, 00:09:33.087 "data_offset": 2048, 00:09:33.087 "data_size": 63488 00:09:33.087 }, 00:09:33.087 { 00:09:33.087 "name": "BaseBdev2", 00:09:33.087 "uuid": "7fb9e5a3-cbde-46c0-95aa-ec4a7a32fa41", 00:09:33.087 "is_configured": true, 00:09:33.087 "data_offset": 2048, 00:09:33.087 "data_size": 63488 00:09:33.087 }, 00:09:33.087 { 
00:09:33.087 "name": "BaseBdev3", 00:09:33.087 "uuid": "807c3a19-4968-497d-ada9-4a476a964c1f", 00:09:33.087 "is_configured": true, 00:09:33.087 "data_offset": 2048, 00:09:33.087 "data_size": 63488 00:09:33.087 } 00:09:33.087 ] 00:09:33.087 } 00:09:33.087 } 00:09:33.087 }' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.087 BaseBdev2 00:09:33.087 BaseBdev3' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.087 12:29:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.087 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.347 [2024-11-19 12:29:38.382136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.347 
12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.347 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.347 "name": "Existed_Raid", 00:09:33.347 "uuid": "29331ecf-b42e-4570-87c2-d36c574bd2a8", 00:09:33.347 "strip_size_kb": 0, 00:09:33.347 "state": "online", 00:09:33.347 "raid_level": "raid1", 00:09:33.347 "superblock": true, 00:09:33.347 "num_base_bdevs": 3, 00:09:33.347 "num_base_bdevs_discovered": 2, 00:09:33.348 "num_base_bdevs_operational": 2, 00:09:33.348 "base_bdevs_list": [ 00:09:33.348 { 00:09:33.348 "name": null, 00:09:33.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.348 "is_configured": false, 00:09:33.348 "data_offset": 0, 00:09:33.348 "data_size": 63488 00:09:33.348 }, 00:09:33.348 { 00:09:33.348 "name": "BaseBdev2", 00:09:33.348 "uuid": "7fb9e5a3-cbde-46c0-95aa-ec4a7a32fa41", 00:09:33.348 "is_configured": true, 00:09:33.348 "data_offset": 2048, 00:09:33.348 "data_size": 63488 00:09:33.348 }, 00:09:33.348 { 00:09:33.348 "name": "BaseBdev3", 00:09:33.348 "uuid": "807c3a19-4968-497d-ada9-4a476a964c1f", 00:09:33.348 "is_configured": true, 00:09:33.348 "data_offset": 2048, 00:09:33.348 "data_size": 63488 00:09:33.348 } 00:09:33.348 ] 00:09:33.348 }' 00:09:33.348 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.348 
12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.608 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:33.608 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:33.608 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.608 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:33.608 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.608 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.868 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 [2024-11-19 12:29:38.904360] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 [2024-11-19 12:29:38.971540] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:33.869 [2024-11-19 12:29:38.971741] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.869 [2024-11-19 12:29:38.983239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.869 [2024-11-19 12:29:38.983360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.869 [2024-11-19 12:29:38.983380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 12:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 BaseBdev2 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 [ 00:09:33.869 { 00:09:33.869 "name": "BaseBdev2", 00:09:33.869 "aliases": [ 00:09:33.869 "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0" 00:09:33.869 ], 00:09:33.869 "product_name": "Malloc disk", 00:09:33.869 "block_size": 512, 00:09:33.869 "num_blocks": 65536, 00:09:33.869 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0", 00:09:33.869 "assigned_rate_limits": { 00:09:33.869 "rw_ios_per_sec": 0, 00:09:33.869 "rw_mbytes_per_sec": 0, 00:09:33.869 "r_mbytes_per_sec": 0, 00:09:33.869 "w_mbytes_per_sec": 0 00:09:33.869 }, 00:09:33.869 "claimed": false, 00:09:33.869 "zoned": false, 00:09:33.869 "supported_io_types": { 00:09:33.869 "read": true, 00:09:33.869 "write": true, 00:09:33.869 "unmap": true, 00:09:33.869 "flush": true, 00:09:33.869 "reset": true, 00:09:33.869 "nvme_admin": false, 00:09:33.869 "nvme_io": false, 00:09:33.869 
"nvme_io_md": false, 00:09:33.869 "write_zeroes": true, 00:09:33.869 "zcopy": true, 00:09:33.869 "get_zone_info": false, 00:09:33.869 "zone_management": false, 00:09:33.869 "zone_append": false, 00:09:33.869 "compare": false, 00:09:33.869 "compare_and_write": false, 00:09:33.869 "abort": true, 00:09:33.869 "seek_hole": false, 00:09:33.869 "seek_data": false, 00:09:33.869 "copy": true, 00:09:33.869 "nvme_iov_md": false 00:09:33.869 }, 00:09:33.869 "memory_domains": [ 00:09:33.869 { 00:09:33.869 "dma_device_id": "system", 00:09:33.869 "dma_device_type": 1 00:09:33.869 }, 00:09:33.869 { 00:09:33.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.869 "dma_device_type": 2 00:09:33.869 } 00:09:33.869 ], 00:09:33.869 "driver_specific": {} 00:09:33.869 } 00:09:33.869 ] 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 BaseBdev3 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.869 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.869 [ 00:09:33.869 { 00:09:33.869 "name": "BaseBdev3", 00:09:33.869 "aliases": [ 00:09:33.869 "aa610b03-6a1e-40f2-90ce-5b00358c026d" 00:09:33.869 ], 00:09:33.869 "product_name": "Malloc disk", 00:09:33.869 "block_size": 512, 00:09:33.869 "num_blocks": 65536, 00:09:33.869 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d", 00:09:33.869 "assigned_rate_limits": { 00:09:33.869 "rw_ios_per_sec": 0, 00:09:34.129 "rw_mbytes_per_sec": 0, 00:09:34.129 "r_mbytes_per_sec": 0, 00:09:34.129 "w_mbytes_per_sec": 0 00:09:34.129 }, 00:09:34.129 "claimed": false, 00:09:34.129 "zoned": false, 00:09:34.129 "supported_io_types": { 00:09:34.129 "read": true, 00:09:34.129 "write": true, 00:09:34.129 "unmap": true, 00:09:34.129 "flush": true, 00:09:34.129 "reset": true, 00:09:34.129 "nvme_admin": false, 
00:09:34.129 "nvme_io": false, 00:09:34.129 "nvme_io_md": false, 00:09:34.129 "write_zeroes": true, 00:09:34.129 "zcopy": true, 00:09:34.129 "get_zone_info": false, 00:09:34.129 "zone_management": false, 00:09:34.129 "zone_append": false, 00:09:34.129 "compare": false, 00:09:34.129 "compare_and_write": false, 00:09:34.129 "abort": true, 00:09:34.129 "seek_hole": false, 00:09:34.129 "seek_data": false, 00:09:34.129 "copy": true, 00:09:34.129 "nvme_iov_md": false 00:09:34.129 }, 00:09:34.129 "memory_domains": [ 00:09:34.129 { 00:09:34.129 "dma_device_id": "system", 00:09:34.129 "dma_device_type": 1 00:09:34.129 }, 00:09:34.129 { 00:09:34.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.129 "dma_device_type": 2 00:09:34.129 } 00:09:34.129 ], 00:09:34.129 "driver_specific": {} 00:09:34.129 } 00:09:34.129 ] 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.129 [2024-11-19 12:29:39.142907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.129 [2024-11-19 12:29:39.143046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.129 [2024-11-19 12:29:39.143091] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.129 [2024-11-19 12:29:39.144978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.129 
12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.129 "name": "Existed_Raid", 00:09:34.129 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99", 00:09:34.129 "strip_size_kb": 0, 00:09:34.129 "state": "configuring", 00:09:34.129 "raid_level": "raid1", 00:09:34.129 "superblock": true, 00:09:34.129 "num_base_bdevs": 3, 00:09:34.129 "num_base_bdevs_discovered": 2, 00:09:34.129 "num_base_bdevs_operational": 3, 00:09:34.129 "base_bdevs_list": [ 00:09:34.129 { 00:09:34.129 "name": "BaseBdev1", 00:09:34.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.129 "is_configured": false, 00:09:34.129 "data_offset": 0, 00:09:34.129 "data_size": 0 00:09:34.129 }, 00:09:34.129 { 00:09:34.129 "name": "BaseBdev2", 00:09:34.129 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0", 00:09:34.129 "is_configured": true, 00:09:34.129 "data_offset": 2048, 00:09:34.129 "data_size": 63488 00:09:34.129 }, 00:09:34.129 { 00:09:34.129 "name": "BaseBdev3", 00:09:34.129 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d", 00:09:34.129 "is_configured": true, 00:09:34.129 "data_offset": 2048, 00:09:34.129 "data_size": 63488 00:09:34.129 } 00:09:34.129 ] 00:09:34.129 }' 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.129 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.388 [2024-11-19 12:29:39.562203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.388 12:29:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.388 "name": 
"Existed_Raid", 00:09:34.388 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99", 00:09:34.388 "strip_size_kb": 0, 00:09:34.388 "state": "configuring", 00:09:34.388 "raid_level": "raid1", 00:09:34.388 "superblock": true, 00:09:34.388 "num_base_bdevs": 3, 00:09:34.388 "num_base_bdevs_discovered": 1, 00:09:34.388 "num_base_bdevs_operational": 3, 00:09:34.388 "base_bdevs_list": [ 00:09:34.388 { 00:09:34.388 "name": "BaseBdev1", 00:09:34.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.388 "is_configured": false, 00:09:34.388 "data_offset": 0, 00:09:34.388 "data_size": 0 00:09:34.388 }, 00:09:34.388 { 00:09:34.388 "name": null, 00:09:34.388 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0", 00:09:34.388 "is_configured": false, 00:09:34.388 "data_offset": 0, 00:09:34.388 "data_size": 63488 00:09:34.388 }, 00:09:34.388 { 00:09:34.388 "name": "BaseBdev3", 00:09:34.388 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d", 00:09:34.388 "is_configured": true, 00:09:34.388 "data_offset": 2048, 00:09:34.388 "data_size": 63488 00:09:34.388 } 00:09:34.388 ] 00:09:34.388 }' 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.388 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.957 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.957 12:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:34.957 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.957 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.957 12:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:34.957 
12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.957 BaseBdev1 00:09:34.957 [2024-11-19 12:29:40.016226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:34.957 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.957 [ 00:09:34.957 { 00:09:34.957 "name": "BaseBdev1", 00:09:34.957 "aliases": [ 00:09:34.957 "2a6298e1-7cbf-40f1-ad9d-514fbc547205" 00:09:34.957 ], 00:09:34.957 "product_name": "Malloc disk", 00:09:34.957 "block_size": 512, 00:09:34.957 "num_blocks": 65536, 00:09:34.957 "uuid": "2a6298e1-7cbf-40f1-ad9d-514fbc547205", 00:09:34.957 "assigned_rate_limits": { 00:09:34.957 "rw_ios_per_sec": 0, 00:09:34.957 "rw_mbytes_per_sec": 0, 00:09:34.957 "r_mbytes_per_sec": 0, 00:09:34.957 "w_mbytes_per_sec": 0 00:09:34.957 }, 00:09:34.957 "claimed": true, 00:09:34.957 "claim_type": "exclusive_write", 00:09:34.957 "zoned": false, 00:09:34.957 "supported_io_types": { 00:09:34.957 "read": true, 00:09:34.957 "write": true, 00:09:34.957 "unmap": true, 00:09:34.957 "flush": true, 00:09:34.957 "reset": true, 00:09:34.957 "nvme_admin": false, 00:09:34.957 "nvme_io": false, 00:09:34.957 "nvme_io_md": false, 00:09:34.957 "write_zeroes": true, 00:09:34.957 "zcopy": true, 00:09:34.957 "get_zone_info": false, 00:09:34.957 "zone_management": false, 00:09:34.957 "zone_append": false, 00:09:34.957 "compare": false, 00:09:34.958 "compare_and_write": false, 00:09:34.958 "abort": true, 00:09:34.958 "seek_hole": false, 00:09:34.958 "seek_data": false, 00:09:34.958 "copy": true, 00:09:34.958 "nvme_iov_md": false 00:09:34.958 }, 00:09:34.958 "memory_domains": [ 00:09:34.958 { 00:09:34.958 "dma_device_id": "system", 00:09:34.958 "dma_device_type": 1 00:09:34.958 }, 00:09:34.958 { 00:09:34.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.958 "dma_device_type": 2 00:09:34.958 } 00:09:34.958 ], 00:09:34.958 "driver_specific": {} 00:09:34.958 } 00:09:34.958 ] 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:34.958 
12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.958 "name": "Existed_Raid", 00:09:34.958 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99", 00:09:34.958 "strip_size_kb": 0, 
00:09:34.958 "state": "configuring", 00:09:34.958 "raid_level": "raid1", 00:09:34.958 "superblock": true, 00:09:34.958 "num_base_bdevs": 3, 00:09:34.958 "num_base_bdevs_discovered": 2, 00:09:34.958 "num_base_bdevs_operational": 3, 00:09:34.958 "base_bdevs_list": [ 00:09:34.958 { 00:09:34.958 "name": "BaseBdev1", 00:09:34.958 "uuid": "2a6298e1-7cbf-40f1-ad9d-514fbc547205", 00:09:34.958 "is_configured": true, 00:09:34.958 "data_offset": 2048, 00:09:34.958 "data_size": 63488 00:09:34.958 }, 00:09:34.958 { 00:09:34.958 "name": null, 00:09:34.958 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0", 00:09:34.958 "is_configured": false, 00:09:34.958 "data_offset": 0, 00:09:34.958 "data_size": 63488 00:09:34.958 }, 00:09:34.958 { 00:09:34.958 "name": "BaseBdev3", 00:09:34.958 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d", 00:09:34.958 "is_configured": true, 00:09:34.958 "data_offset": 2048, 00:09:34.958 "data_size": 63488 00:09:34.958 } 00:09:34.958 ] 00:09:34.958 }' 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.958 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.218 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.478 [2024-11-19 12:29:40.507546] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.478 "name": "Existed_Raid", 00:09:35.478 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99", 00:09:35.478 "strip_size_kb": 0, 00:09:35.478 "state": "configuring", 00:09:35.478 "raid_level": "raid1", 00:09:35.478 "superblock": true, 00:09:35.478 "num_base_bdevs": 3, 00:09:35.478 "num_base_bdevs_discovered": 1, 00:09:35.478 "num_base_bdevs_operational": 3, 00:09:35.478 "base_bdevs_list": [ 00:09:35.478 { 00:09:35.478 "name": "BaseBdev1", 00:09:35.478 "uuid": "2a6298e1-7cbf-40f1-ad9d-514fbc547205", 00:09:35.478 "is_configured": true, 00:09:35.478 "data_offset": 2048, 00:09:35.478 "data_size": 63488 00:09:35.478 }, 00:09:35.478 { 00:09:35.478 "name": null, 00:09:35.478 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0", 00:09:35.478 "is_configured": false, 00:09:35.478 "data_offset": 0, 00:09:35.478 "data_size": 63488 00:09:35.478 }, 00:09:35.478 { 00:09:35.478 "name": null, 00:09:35.478 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d", 00:09:35.478 "is_configured": false, 00:09:35.478 "data_offset": 0, 00:09:35.478 "data_size": 63488 00:09:35.478 } 00:09:35.478 ] 00:09:35.478 }' 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.478 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.738 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.738 12:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.738 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:35.738 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.738 12:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.999 [2024-11-19 12:29:41.010805] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.999 "name": "Existed_Raid", 00:09:35.999 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99", 00:09:35.999 "strip_size_kb": 0, 00:09:35.999 "state": "configuring", 00:09:35.999 "raid_level": "raid1", 00:09:35.999 "superblock": true, 00:09:35.999 "num_base_bdevs": 3, 00:09:35.999 "num_base_bdevs_discovered": 2, 00:09:35.999 "num_base_bdevs_operational": 3, 00:09:35.999 "base_bdevs_list": [ 00:09:35.999 { 00:09:35.999 "name": "BaseBdev1", 00:09:35.999 "uuid": "2a6298e1-7cbf-40f1-ad9d-514fbc547205", 00:09:35.999 "is_configured": true, 00:09:35.999 "data_offset": 2048, 00:09:35.999 "data_size": 63488 00:09:35.999 }, 00:09:35.999 { 00:09:35.999 "name": null, 00:09:35.999 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0", 00:09:35.999 "is_configured": false, 00:09:35.999 "data_offset": 0, 00:09:35.999 "data_size": 63488 00:09:35.999 }, 00:09:35.999 { 00:09:35.999 "name": "BaseBdev3", 00:09:35.999 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d", 00:09:35.999 "is_configured": true, 00:09:35.999 "data_offset": 2048, 00:09:35.999 "data_size": 63488 00:09:35.999 } 00:09:35.999 ] 00:09:35.999 }' 00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:35.999 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.260 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.260 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.260 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.260 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:36.260 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.520 [2024-11-19 12:29:41.533890] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.520 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:36.520 "name": "Existed_Raid",
00:09:36.521 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99",
00:09:36.521 "strip_size_kb": 0,
00:09:36.521 "state": "configuring",
00:09:36.521 "raid_level": "raid1",
00:09:36.521 "superblock": true,
00:09:36.521 "num_base_bdevs": 3,
00:09:36.521 "num_base_bdevs_discovered": 1,
00:09:36.521 "num_base_bdevs_operational": 3,
00:09:36.521 "base_bdevs_list": [
00:09:36.521 {
00:09:36.521 "name": null,
00:09:36.521 "uuid": "2a6298e1-7cbf-40f1-ad9d-514fbc547205",
00:09:36.521 "is_configured": false,
00:09:36.521 "data_offset": 0,
00:09:36.521 "data_size": 63488
00:09:36.521 },
00:09:36.521 {
00:09:36.521 "name": null,
00:09:36.521 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0",
00:09:36.521 "is_configured": false,
00:09:36.521 "data_offset": 0,
00:09:36.521 "data_size": 63488
00:09:36.521 },
00:09:36.521 {
00:09:36.521 "name": "BaseBdev3",
00:09:36.521 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d",
00:09:36.521 "is_configured": true,
00:09:36.521 "data_offset": 2048,
00:09:36.521 "data_size": 63488
00:09:36.521 }
00:09:36.521 ]
00:09:36.521 }'
00:09:36.521 12:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:36.521 12:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.781 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.781 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.781 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:36.781 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:36.781 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.042 [2024-11-19 12:29:42.059404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:37.042 "name": "Existed_Raid",
00:09:37.042 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99",
00:09:37.042 "strip_size_kb": 0,
00:09:37.042 "state": "configuring",
00:09:37.042 "raid_level": "raid1",
00:09:37.042 "superblock": true,
00:09:37.042 "num_base_bdevs": 3,
00:09:37.042 "num_base_bdevs_discovered": 2,
00:09:37.042 "num_base_bdevs_operational": 3,
00:09:37.042 "base_bdevs_list": [
00:09:37.042 {
00:09:37.042 "name": null,
00:09:37.042 "uuid": "2a6298e1-7cbf-40f1-ad9d-514fbc547205",
00:09:37.042 "is_configured": false,
00:09:37.042 "data_offset": 0,
00:09:37.042 "data_size": 63488
00:09:37.042 },
00:09:37.042 {
00:09:37.042 "name": "BaseBdev2",
00:09:37.042 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0",
00:09:37.042 "is_configured": true,
00:09:37.042 "data_offset": 2048,
00:09:37.042 "data_size": 63488
00:09:37.042 },
00:09:37.042 {
00:09:37.042 "name": "BaseBdev3",
00:09:37.042 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d",
00:09:37.042 "is_configured": true,
00:09:37.042 "data_offset": 2048,
00:09:37.042 "data_size": 63488
00:09:37.042 }
00:09:37.042 ]
00:09:37.042 }'
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:37.042 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.303 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.303 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.303 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.303 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:37.303 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.563 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:09:37.563 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.563 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.563 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:09:37.563 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.563 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.563 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2a6298e1-7cbf-40f1-ad9d-514fbc547205
00:09:37.563 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.563 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.564 [2024-11-19 12:29:42.625150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:09:37.564 [2024-11-19 12:29:42.625323] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:09:37.564 [2024-11-19 12:29:42.625336] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:37.564 [2024-11-19 12:29:42.625603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:09:37.564 [2024-11-19 12:29:42.625734] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:09:37.564 [2024-11-19 12:29:42.625763] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:09:37.564 NewBaseBdev
00:09:37.564 [2024-11-19 12:29:42.625861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.564 [
00:09:37.564 {
00:09:37.564 "name": "NewBaseBdev",
00:09:37.564 "aliases": [
00:09:37.564 "2a6298e1-7cbf-40f1-ad9d-514fbc547205"
00:09:37.564 ],
00:09:37.564 "product_name": "Malloc disk",
00:09:37.564 "block_size": 512,
00:09:37.564 "num_blocks": 65536,
00:09:37.564 "uuid": "2a6298e1-7cbf-40f1-ad9d-514fbc547205",
00:09:37.564 "assigned_rate_limits": {
00:09:37.564 "rw_ios_per_sec": 0,
00:09:37.564 "rw_mbytes_per_sec": 0,
00:09:37.564 "r_mbytes_per_sec": 0,
00:09:37.564 "w_mbytes_per_sec": 0
00:09:37.564 },
00:09:37.564 "claimed": true,
00:09:37.564 "claim_type": "exclusive_write",
00:09:37.564 "zoned": false,
00:09:37.564 "supported_io_types": {
00:09:37.564 "read": true,
00:09:37.564 "write": true,
00:09:37.564 "unmap": true,
00:09:37.564 "flush": true,
00:09:37.564 "reset": true,
00:09:37.564 "nvme_admin": false,
00:09:37.564 "nvme_io": false,
00:09:37.564 "nvme_io_md": false,
00:09:37.564 "write_zeroes": true,
00:09:37.564 "zcopy": true,
00:09:37.564 "get_zone_info": false,
00:09:37.564 "zone_management": false,
00:09:37.564 "zone_append": false,
00:09:37.564 "compare": false,
00:09:37.564 "compare_and_write": false,
00:09:37.564 "abort": true,
00:09:37.564 "seek_hole": false,
00:09:37.564 "seek_data": false,
00:09:37.564 "copy": true,
00:09:37.564 "nvme_iov_md": false
00:09:37.564 },
00:09:37.564 "memory_domains": [
00:09:37.564 {
00:09:37.564 "dma_device_id": "system",
00:09:37.564 "dma_device_type": 1
00:09:37.564 },
00:09:37.564 {
00:09:37.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.564 "dma_device_type": 2
00:09:37.564 }
00:09:37.564 ],
00:09:37.564 "driver_specific": {}
00:09:37.564 }
00:09:37.564 ]
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:37.564 "name": "Existed_Raid",
00:09:37.564 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99",
00:09:37.564 "strip_size_kb": 0,
00:09:37.564 "state": "online",
00:09:37.564 "raid_level": "raid1",
00:09:37.564 "superblock": true,
00:09:37.564 "num_base_bdevs": 3,
00:09:37.564 "num_base_bdevs_discovered": 3,
00:09:37.564 "num_base_bdevs_operational": 3,
00:09:37.564 "base_bdevs_list": [
00:09:37.564 {
00:09:37.564 "name": "NewBaseBdev",
00:09:37.564 "uuid": "2a6298e1-7cbf-40f1-ad9d-514fbc547205",
00:09:37.564 "is_configured": true,
00:09:37.564 "data_offset": 2048,
00:09:37.564 "data_size": 63488
00:09:37.564 },
00:09:37.564 {
00:09:37.564 "name": "BaseBdev2",
00:09:37.564 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0",
00:09:37.564 "is_configured": true,
00:09:37.564 "data_offset": 2048,
00:09:37.564 "data_size": 63488
00:09:37.564 },
00:09:37.564 {
00:09:37.564 "name": "BaseBdev3",
00:09:37.564 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d",
00:09:37.564 "is_configured": true,
00:09:37.564 "data_offset": 2048,
00:09:37.564 "data_size": 63488
00:09:37.564 }
00:09:37.564 ]
00:09:37.564 }'
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:37.564 12:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.138 [2024-11-19 12:29:43.112691] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.138 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:38.138 "name": "Existed_Raid",
00:09:38.138 "aliases": [
00:09:38.138 "764a070f-b19e-4101-851a-3634b84f2b99"
00:09:38.138 ],
00:09:38.138 "product_name": "Raid Volume",
00:09:38.138 "block_size": 512,
00:09:38.138 "num_blocks": 63488,
00:09:38.138 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99",
00:09:38.138 "assigned_rate_limits": {
00:09:38.138 "rw_ios_per_sec": 0,
00:09:38.138 "rw_mbytes_per_sec": 0,
00:09:38.138 "r_mbytes_per_sec": 0,
00:09:38.138 "w_mbytes_per_sec": 0
00:09:38.138 },
00:09:38.138 "claimed": false,
00:09:38.138 "zoned": false,
00:09:38.138 "supported_io_types": {
00:09:38.138 "read": true,
00:09:38.138 "write": true,
00:09:38.138 "unmap": false,
00:09:38.138 "flush": false,
00:09:38.138 "reset": true,
00:09:38.138 "nvme_admin": false,
00:09:38.138 "nvme_io": false,
00:09:38.138 "nvme_io_md": false,
00:09:38.138 "write_zeroes": true,
00:09:38.138 "zcopy": false,
00:09:38.138 "get_zone_info": false,
00:09:38.138 "zone_management": false,
00:09:38.138 "zone_append": false,
00:09:38.138 "compare": false,
00:09:38.138 "compare_and_write": false,
00:09:38.138 "abort": false,
00:09:38.138 "seek_hole": false,
00:09:38.138 "seek_data": false,
00:09:38.138 "copy": false,
00:09:38.138 "nvme_iov_md": false
00:09:38.138 },
00:09:38.138 "memory_domains": [
00:09:38.138 {
00:09:38.138 "dma_device_id": "system",
00:09:38.138 "dma_device_type": 1
00:09:38.138 },
00:09:38.138 {
00:09:38.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:38.138 "dma_device_type": 2
00:09:38.139 },
00:09:38.139 {
00:09:38.139 "dma_device_id": "system",
00:09:38.139 "dma_device_type": 1
00:09:38.139 },
00:09:38.139 {
00:09:38.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:38.139 "dma_device_type": 2
00:09:38.139 },
00:09:38.139 {
00:09:38.139 "dma_device_id": "system",
00:09:38.139 "dma_device_type": 1
00:09:38.139 },
00:09:38.139 {
00:09:38.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:38.139 "dma_device_type": 2
00:09:38.139 }
00:09:38.139 ],
00:09:38.139 "driver_specific": {
00:09:38.139 "raid": {
00:09:38.139 "uuid": "764a070f-b19e-4101-851a-3634b84f2b99",
00:09:38.139 "strip_size_kb": 0,
00:09:38.139 "state": "online",
00:09:38.139 "raid_level": "raid1",
00:09:38.139 "superblock": true,
00:09:38.139 "num_base_bdevs": 3,
00:09:38.139 "num_base_bdevs_discovered": 3,
00:09:38.139 "num_base_bdevs_operational": 3,
00:09:38.139 "base_bdevs_list": [
00:09:38.139 {
00:09:38.139 "name": "NewBaseBdev",
00:09:38.139 "uuid": "2a6298e1-7cbf-40f1-ad9d-514fbc547205",
00:09:38.139 "is_configured": true,
00:09:38.139 "data_offset": 2048,
00:09:38.139 "data_size": 63488
00:09:38.139 },
00:09:38.139 {
00:09:38.139 "name": "BaseBdev2",
00:09:38.139 "uuid": "cd7c6df8-6f5f-46c0-8d69-175e3a1602b0",
00:09:38.139 "is_configured": true,
00:09:38.139 "data_offset": 2048,
00:09:38.139 "data_size": 63488
00:09:38.139 },
00:09:38.139 {
00:09:38.139 "name": "BaseBdev3",
00:09:38.139 "uuid": "aa610b03-6a1e-40f2-90ce-5b00358c026d",
00:09:38.139 "is_configured": true,
00:09:38.139 "data_offset": 2048,
00:09:38.139 "data_size": 63488
00:09:38.139 }
00:09:38.139 ]
00:09:38.139 }
00:09:38.139 }
00:09:38.139 }'
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:09:38.139 BaseBdev2
00:09:38.139 BaseBdev3'
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.139 [2024-11-19 12:29:43.379920] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:38.139 [2024-11-19 12:29:43.379960] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:38.139 [2024-11-19 12:29:43.380036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:38.139 [2024-11-19 12:29:43.380281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:38.139 [2024-11-19 12:29:43.380291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79233
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79233 ']'
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79233
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:09:38.139 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:38.400 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79233
00:09:38.400 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:38.401 killing process with pid 79233
12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:38.401 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79233'
00:09:38.401 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79233
[2024-11-19 12:29:43.419077] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:38.401 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79233
[2024-11-19 12:29:43.449795] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:38.662 ************************************
00:09:38.662 END TEST raid_state_function_test_sb
00:09:38.662 ************************************
00:09:38.662 12:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:09:38.662
00:09:38.662 real 0m9.043s
00:09:38.662 user 0m15.423s
00:09:38.662 sys 0m1.872s
00:09:38.662 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:38.662 12:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:38.662 12:29:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:09:38.662 12:29:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:38.662 12:29:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:38.662 12:29:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:38.662 ************************************
00:09:38.662 START TEST raid_superblock_test
00:09:38.662 ************************************
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79842
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79842
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79842 ']'
00:09:38.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:38.662 12:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.662 [2024-11-19 12:29:43.871592] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:38.662 [2024-11-19 12:29:43.871835] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79842 ]
00:09:38.924 [2024-11-19 12:29:44.037428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:38.924 [2024-11-19 12:29:44.085177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.924 [2024-11-19 12:29:44.126870] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:38.924 [2024-11-19 12:29:44.126992] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:39.495 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:39.495 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:09:39.495 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:39.495 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:39.495 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:39.495 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:39.495 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:39.495 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:39.496 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:39.496 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:39.496 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:39.496 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.496 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.757 malloc1
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.757 [2024-11-19 12:29:44.764773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:39.757 [2024-11-19 12:29:44.764955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:39.757 [2024-11-19 12:29:44.764992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:39.757 [2024-11-19 12:29:44.765025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:39.757 [2024-11-19 12:29:44.767235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:39.757 [2024-11-19 12:29:44.767308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:39.757 pt1
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.757 malloc2
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.757 [2024-11-19 12:29:44.807313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:39.757 [2024-11-19 12:29:44.807453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:39.757 [2024-11-19 12:29:44.807472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:39.757 [2024-11-19 12:29:44.807483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:39.757 [2024-11-19 12:29:44.809573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:39.757 [2024-11-19 12:29:44.809612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:39.757 pt2
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.757 malloc3
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.757 [2024-11-19 12:29:44.835911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:39.757 [2024-11-19 12:29:44.836035]
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.757 [2024-11-19 12:29:44.836068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:39.757 [2024-11-19 12:29:44.836098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.757 [2024-11-19 12:29:44.838133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.757 [2024-11-19 12:29:44.838203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:39.757 pt3 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.757 [2024-11-19 12:29:44.847930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.757 [2024-11-19 12:29:44.849797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.757 [2024-11-19 12:29:44.849893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:39.757 [2024-11-19 12:29:44.850063] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:39.757 [2024-11-19 12:29:44.850129] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:39.757 [2024-11-19 12:29:44.850401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:39.757 
[2024-11-19 12:29:44.850573] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:39.757 [2024-11-19 12:29:44.850617] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:39.757 [2024-11-19 12:29:44.850790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.757 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.758 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.758 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:09:39.758 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.758 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.758 "name": "raid_bdev1", 00:09:39.758 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:39.758 "strip_size_kb": 0, 00:09:39.758 "state": "online", 00:09:39.758 "raid_level": "raid1", 00:09:39.758 "superblock": true, 00:09:39.758 "num_base_bdevs": 3, 00:09:39.758 "num_base_bdevs_discovered": 3, 00:09:39.758 "num_base_bdevs_operational": 3, 00:09:39.758 "base_bdevs_list": [ 00:09:39.758 { 00:09:39.758 "name": "pt1", 00:09:39.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.758 "is_configured": true, 00:09:39.758 "data_offset": 2048, 00:09:39.758 "data_size": 63488 00:09:39.758 }, 00:09:39.758 { 00:09:39.758 "name": "pt2", 00:09:39.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.758 "is_configured": true, 00:09:39.758 "data_offset": 2048, 00:09:39.758 "data_size": 63488 00:09:39.758 }, 00:09:39.758 { 00:09:39.758 "name": "pt3", 00:09:39.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.758 "is_configured": true, 00:09:39.758 "data_offset": 2048, 00:09:39.758 "data_size": 63488 00:09:39.758 } 00:09:39.758 ] 00:09:39.758 }' 00:09:39.758 12:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.758 12:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.326 12:29:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.326 [2024-11-19 12:29:45.335389] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.326 "name": "raid_bdev1", 00:09:40.326 "aliases": [ 00:09:40.326 "5a4a80e3-c4ba-4567-8391-6c9328af6be5" 00:09:40.326 ], 00:09:40.326 "product_name": "Raid Volume", 00:09:40.326 "block_size": 512, 00:09:40.326 "num_blocks": 63488, 00:09:40.326 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:40.326 "assigned_rate_limits": { 00:09:40.326 "rw_ios_per_sec": 0, 00:09:40.326 "rw_mbytes_per_sec": 0, 00:09:40.326 "r_mbytes_per_sec": 0, 00:09:40.326 "w_mbytes_per_sec": 0 00:09:40.326 }, 00:09:40.326 "claimed": false, 00:09:40.326 "zoned": false, 00:09:40.326 "supported_io_types": { 00:09:40.326 "read": true, 00:09:40.326 "write": true, 00:09:40.326 "unmap": false, 00:09:40.326 "flush": false, 00:09:40.326 "reset": true, 00:09:40.326 "nvme_admin": false, 00:09:40.326 "nvme_io": false, 00:09:40.326 "nvme_io_md": false, 00:09:40.326 "write_zeroes": true, 00:09:40.326 "zcopy": false, 00:09:40.326 "get_zone_info": false, 00:09:40.326 "zone_management": false, 00:09:40.326 "zone_append": false, 00:09:40.326 "compare": false, 00:09:40.326 
"compare_and_write": false, 00:09:40.326 "abort": false, 00:09:40.326 "seek_hole": false, 00:09:40.326 "seek_data": false, 00:09:40.326 "copy": false, 00:09:40.326 "nvme_iov_md": false 00:09:40.326 }, 00:09:40.326 "memory_domains": [ 00:09:40.326 { 00:09:40.326 "dma_device_id": "system", 00:09:40.326 "dma_device_type": 1 00:09:40.326 }, 00:09:40.326 { 00:09:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.326 "dma_device_type": 2 00:09:40.326 }, 00:09:40.326 { 00:09:40.326 "dma_device_id": "system", 00:09:40.326 "dma_device_type": 1 00:09:40.326 }, 00:09:40.326 { 00:09:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.326 "dma_device_type": 2 00:09:40.326 }, 00:09:40.326 { 00:09:40.326 "dma_device_id": "system", 00:09:40.326 "dma_device_type": 1 00:09:40.326 }, 00:09:40.326 { 00:09:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.326 "dma_device_type": 2 00:09:40.326 } 00:09:40.326 ], 00:09:40.326 "driver_specific": { 00:09:40.326 "raid": { 00:09:40.326 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:40.326 "strip_size_kb": 0, 00:09:40.326 "state": "online", 00:09:40.326 "raid_level": "raid1", 00:09:40.326 "superblock": true, 00:09:40.326 "num_base_bdevs": 3, 00:09:40.326 "num_base_bdevs_discovered": 3, 00:09:40.326 "num_base_bdevs_operational": 3, 00:09:40.326 "base_bdevs_list": [ 00:09:40.326 { 00:09:40.326 "name": "pt1", 00:09:40.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.326 "is_configured": true, 00:09:40.326 "data_offset": 2048, 00:09:40.326 "data_size": 63488 00:09:40.326 }, 00:09:40.326 { 00:09:40.326 "name": "pt2", 00:09:40.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.326 "is_configured": true, 00:09:40.326 "data_offset": 2048, 00:09:40.326 "data_size": 63488 00:09:40.326 }, 00:09:40.326 { 00:09:40.326 "name": "pt3", 00:09:40.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.326 "is_configured": true, 00:09:40.326 "data_offset": 2048, 00:09:40.326 "data_size": 63488 00:09:40.326 } 
00:09:40.326 ] 00:09:40.326 } 00:09:40.326 } 00:09:40.326 }' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:40.326 pt2 00:09:40.326 pt3' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:40.326 [2024-11-19 12:29:45.567023] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.326 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5a4a80e3-c4ba-4567-8391-6c9328af6be5 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5a4a80e3-c4ba-4567-8391-6c9328af6be5 ']' 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.587 [2024-11-19 12:29:45.614669] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.587 [2024-11-19 12:29:45.614695] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.587 [2024-11-19 12:29:45.614795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.587 [2024-11-19 12:29:45.614871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.587 [2024-11-19 12:29:45.614885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:40.587 12:29:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.587 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.588 [2024-11-19 12:29:45.770412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:40.588 [2024-11-19 12:29:45.772283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:40.588 [2024-11-19 12:29:45.772383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc3 is claimed 00:09:40.588 [2024-11-19 12:29:45.772437] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:40.588 [2024-11-19 12:29:45.772494] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:40.588 [2024-11-19 12:29:45.772515] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:40.588 [2024-11-19 12:29:45.772528] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.588 [2024-11-19 12:29:45.772544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:40.588 request: 00:09:40.588 { 00:09:40.588 "name": "raid_bdev1", 00:09:40.588 "raid_level": "raid1", 00:09:40.588 "base_bdevs": [ 00:09:40.588 "malloc1", 00:09:40.588 "malloc2", 00:09:40.588 "malloc3" 00:09:40.588 ], 00:09:40.588 "superblock": false, 00:09:40.588 "method": "bdev_raid_create", 00:09:40.588 "req_id": 1 00:09:40.588 } 00:09:40.588 Got JSON-RPC error response 00:09:40.588 response: 00:09:40.588 { 00:09:40.588 "code": -17, 00:09:40.588 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:40.588 } 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.588 12:29:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.588 [2024-11-19 12:29:45.838264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:40.588 [2024-11-19 12:29:45.838364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.588 [2024-11-19 12:29:45.838397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:40.588 [2024-11-19 12:29:45.838427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.588 [2024-11-19 12:29:45.840508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.588 [2024-11-19 12:29:45.840577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:40.588 [2024-11-19 12:29:45.840656] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:40.588 [2024-11-19 12:29:45.840705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:40.588 pt1 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.588 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.848 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.848 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.849 "name": "raid_bdev1", 00:09:40.849 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:40.849 "strip_size_kb": 0, 00:09:40.849 "state": "configuring", 00:09:40.849 
"raid_level": "raid1", 00:09:40.849 "superblock": true, 00:09:40.849 "num_base_bdevs": 3, 00:09:40.849 "num_base_bdevs_discovered": 1, 00:09:40.849 "num_base_bdevs_operational": 3, 00:09:40.849 "base_bdevs_list": [ 00:09:40.849 { 00:09:40.849 "name": "pt1", 00:09:40.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.849 "is_configured": true, 00:09:40.849 "data_offset": 2048, 00:09:40.849 "data_size": 63488 00:09:40.849 }, 00:09:40.849 { 00:09:40.849 "name": null, 00:09:40.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.849 "is_configured": false, 00:09:40.849 "data_offset": 2048, 00:09:40.849 "data_size": 63488 00:09:40.849 }, 00:09:40.849 { 00:09:40.849 "name": null, 00:09:40.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.849 "is_configured": false, 00:09:40.849 "data_offset": 2048, 00:09:40.849 "data_size": 63488 00:09:40.849 } 00:09:40.849 ] 00:09:40.849 }' 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.849 12:29:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.109 [2024-11-19 12:29:46.273546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.109 [2024-11-19 12:29:46.273610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.109 [2024-11-19 12:29:46.273629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:41.109 [2024-11-19 12:29:46.273641] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.109 [2024-11-19 12:29:46.274042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.109 [2024-11-19 12:29:46.274070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.109 [2024-11-19 12:29:46.274139] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.109 [2024-11-19 12:29:46.274162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.109 pt2 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.109 [2024-11-19 12:29:46.281531] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.109 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.109 "name": "raid_bdev1", 00:09:41.109 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:41.109 "strip_size_kb": 0, 00:09:41.109 "state": "configuring", 00:09:41.109 "raid_level": "raid1", 00:09:41.109 "superblock": true, 00:09:41.109 "num_base_bdevs": 3, 00:09:41.109 "num_base_bdevs_discovered": 1, 00:09:41.109 "num_base_bdevs_operational": 3, 00:09:41.109 "base_bdevs_list": [ 00:09:41.109 { 00:09:41.109 "name": "pt1", 00:09:41.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.109 "is_configured": true, 00:09:41.109 "data_offset": 2048, 00:09:41.109 "data_size": 63488 00:09:41.109 }, 00:09:41.109 { 00:09:41.109 "name": null, 00:09:41.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.109 "is_configured": false, 00:09:41.109 "data_offset": 0, 00:09:41.109 "data_size": 63488 00:09:41.109 }, 00:09:41.109 { 00:09:41.109 "name": null, 00:09:41.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.109 "is_configured": false, 00:09:41.110 "data_offset": 2048, 00:09:41.110 
"data_size": 63488 00:09:41.110 } 00:09:41.110 ] 00:09:41.110 }' 00:09:41.110 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.110 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.680 [2024-11-19 12:29:46.752732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.680 [2024-11-19 12:29:46.752877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.680 [2024-11-19 12:29:46.752913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:41.680 [2024-11-19 12:29:46.752941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.680 [2024-11-19 12:29:46.753340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.680 [2024-11-19 12:29:46.753399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.680 [2024-11-19 12:29:46.753503] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.680 [2024-11-19 12:29:46.753557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.680 pt2 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.680 [2024-11-19 12:29:46.764678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:41.680 [2024-11-19 12:29:46.764721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.680 [2024-11-19 12:29:46.764738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:41.680 [2024-11-19 12:29:46.764759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.680 [2024-11-19 12:29:46.765071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.680 [2024-11-19 12:29:46.765103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:41.680 [2024-11-19 12:29:46.765160] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:41.680 [2024-11-19 12:29:46.765175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:41.680 [2024-11-19 12:29:46.765266] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:41.680 [2024-11-19 12:29:46.765278] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.680 [2024-11-19 12:29:46.765488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.680 [2024-11-19 12:29:46.765600] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 
00:09:41.680 [2024-11-19 12:29:46.765611] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:41.680 [2024-11-19 12:29:46.765704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.680 pt3 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.680 "name": "raid_bdev1", 00:09:41.680 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:41.680 "strip_size_kb": 0, 00:09:41.680 "state": "online", 00:09:41.680 "raid_level": "raid1", 00:09:41.680 "superblock": true, 00:09:41.680 "num_base_bdevs": 3, 00:09:41.680 "num_base_bdevs_discovered": 3, 00:09:41.680 "num_base_bdevs_operational": 3, 00:09:41.680 "base_bdevs_list": [ 00:09:41.680 { 00:09:41.680 "name": "pt1", 00:09:41.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.680 "is_configured": true, 00:09:41.680 "data_offset": 2048, 00:09:41.680 "data_size": 63488 00:09:41.680 }, 00:09:41.680 { 00:09:41.680 "name": "pt2", 00:09:41.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.680 "is_configured": true, 00:09:41.680 "data_offset": 2048, 00:09:41.680 "data_size": 63488 00:09:41.680 }, 00:09:41.680 { 00:09:41.680 "name": "pt3", 00:09:41.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.680 "is_configured": true, 00:09:41.680 "data_offset": 2048, 00:09:41.680 "data_size": 63488 00:09:41.680 } 00:09:41.680 ] 00:09:41.680 }' 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.680 12:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.251 12:29:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.251 [2024-11-19 12:29:47.220146] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.251 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.251 "name": "raid_bdev1", 00:09:42.251 "aliases": [ 00:09:42.251 "5a4a80e3-c4ba-4567-8391-6c9328af6be5" 00:09:42.251 ], 00:09:42.251 "product_name": "Raid Volume", 00:09:42.251 "block_size": 512, 00:09:42.251 "num_blocks": 63488, 00:09:42.252 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:42.252 "assigned_rate_limits": { 00:09:42.252 "rw_ios_per_sec": 0, 00:09:42.252 "rw_mbytes_per_sec": 0, 00:09:42.252 "r_mbytes_per_sec": 0, 00:09:42.252 "w_mbytes_per_sec": 0 00:09:42.252 }, 00:09:42.252 "claimed": false, 00:09:42.252 "zoned": false, 00:09:42.252 "supported_io_types": { 00:09:42.252 "read": true, 00:09:42.252 "write": true, 00:09:42.252 "unmap": false, 00:09:42.252 "flush": false, 00:09:42.252 "reset": true, 00:09:42.252 "nvme_admin": false, 00:09:42.252 "nvme_io": false, 00:09:42.252 "nvme_io_md": false, 00:09:42.252 "write_zeroes": true, 00:09:42.252 "zcopy": false, 00:09:42.252 "get_zone_info": false, 00:09:42.252 
"zone_management": false, 00:09:42.252 "zone_append": false, 00:09:42.252 "compare": false, 00:09:42.252 "compare_and_write": false, 00:09:42.252 "abort": false, 00:09:42.252 "seek_hole": false, 00:09:42.252 "seek_data": false, 00:09:42.252 "copy": false, 00:09:42.252 "nvme_iov_md": false 00:09:42.252 }, 00:09:42.252 "memory_domains": [ 00:09:42.252 { 00:09:42.252 "dma_device_id": "system", 00:09:42.252 "dma_device_type": 1 00:09:42.252 }, 00:09:42.252 { 00:09:42.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.252 "dma_device_type": 2 00:09:42.252 }, 00:09:42.252 { 00:09:42.252 "dma_device_id": "system", 00:09:42.252 "dma_device_type": 1 00:09:42.252 }, 00:09:42.252 { 00:09:42.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.252 "dma_device_type": 2 00:09:42.252 }, 00:09:42.252 { 00:09:42.252 "dma_device_id": "system", 00:09:42.252 "dma_device_type": 1 00:09:42.252 }, 00:09:42.252 { 00:09:42.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.252 "dma_device_type": 2 00:09:42.252 } 00:09:42.252 ], 00:09:42.252 "driver_specific": { 00:09:42.252 "raid": { 00:09:42.252 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:42.252 "strip_size_kb": 0, 00:09:42.252 "state": "online", 00:09:42.252 "raid_level": "raid1", 00:09:42.252 "superblock": true, 00:09:42.252 "num_base_bdevs": 3, 00:09:42.252 "num_base_bdevs_discovered": 3, 00:09:42.252 "num_base_bdevs_operational": 3, 00:09:42.252 "base_bdevs_list": [ 00:09:42.252 { 00:09:42.252 "name": "pt1", 00:09:42.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.252 "is_configured": true, 00:09:42.252 "data_offset": 2048, 00:09:42.252 "data_size": 63488 00:09:42.252 }, 00:09:42.252 { 00:09:42.252 "name": "pt2", 00:09:42.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.252 "is_configured": true, 00:09:42.252 "data_offset": 2048, 00:09:42.252 "data_size": 63488 00:09:42.252 }, 00:09:42.252 { 00:09:42.252 "name": "pt3", 00:09:42.252 "uuid": "00000000-0000-0000-0000-000000000003", 
00:09:42.252 "is_configured": true, 00:09:42.252 "data_offset": 2048, 00:09:42.252 "data_size": 63488 00:09:42.252 } 00:09:42.252 ] 00:09:42.252 } 00:09:42.252 } 00:09:42.252 }' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:42.252 pt2 00:09:42.252 pt3' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b pt2 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.252 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.512 [2024-11-19 12:29:47.511634] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:42.512 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.512 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5a4a80e3-c4ba-4567-8391-6c9328af6be5 '!=' 5a4a80e3-c4ba-4567-8391-6c9328af6be5 ']' 00:09:42.512 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:42.512 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.513 [2024-11-19 12:29:47.559339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.513 "name": "raid_bdev1", 00:09:42.513 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:42.513 "strip_size_kb": 0, 00:09:42.513 "state": "online", 00:09:42.513 "raid_level": "raid1", 00:09:42.513 "superblock": true, 00:09:42.513 "num_base_bdevs": 3, 00:09:42.513 "num_base_bdevs_discovered": 2, 00:09:42.513 "num_base_bdevs_operational": 2, 00:09:42.513 "base_bdevs_list": [ 00:09:42.513 { 00:09:42.513 "name": null, 00:09:42.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.513 "is_configured": false, 00:09:42.513 "data_offset": 0, 00:09:42.513 "data_size": 63488 00:09:42.513 }, 00:09:42.513 { 00:09:42.513 "name": "pt2", 00:09:42.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.513 "is_configured": true, 00:09:42.513 "data_offset": 2048, 00:09:42.513 "data_size": 63488 00:09:42.513 }, 00:09:42.513 { 00:09:42.513 "name": "pt3", 00:09:42.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.513 "is_configured": true, 00:09:42.513 "data_offset": 2048, 00:09:42.513 "data_size": 63488 00:09:42.513 } 00:09:42.513 ] 00:09:42.513 }' 00:09:42.513 12:29:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.513 12:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.773 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.773 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.773 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.034 [2024-11-19 12:29:48.034500] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.034 [2024-11-19 12:29:48.034535] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.034 [2024-11-19 12:29:48.034603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.034 [2024-11-19 12:29:48.034660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.034 [2024-11-19 12:29:48.034669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:43.034 
12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:43.034 [2024-11-19 12:29:48.106367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.034 [2024-11-19 12:29:48.106473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.034 [2024-11-19 12:29:48.106506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:43.034 [2024-11-19 12:29:48.106531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.034 [2024-11-19 12:29:48.108715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.034 [2024-11-19 12:29:48.108789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.034 [2024-11-19 12:29:48.108882] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:43.034 [2024-11-19 12:29:48.108937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.034 pt2 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.034 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.034 "name": "raid_bdev1", 00:09:43.034 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:43.034 "strip_size_kb": 0, 00:09:43.034 "state": "configuring", 00:09:43.034 "raid_level": "raid1", 00:09:43.034 "superblock": true, 00:09:43.035 "num_base_bdevs": 3, 00:09:43.035 "num_base_bdevs_discovered": 1, 00:09:43.035 "num_base_bdevs_operational": 2, 00:09:43.035 "base_bdevs_list": [ 00:09:43.035 { 00:09:43.035 "name": null, 00:09:43.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.035 "is_configured": false, 00:09:43.035 "data_offset": 2048, 00:09:43.035 "data_size": 63488 00:09:43.035 }, 00:09:43.035 { 00:09:43.035 "name": "pt2", 00:09:43.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.035 "is_configured": true, 00:09:43.035 "data_offset": 2048, 00:09:43.035 "data_size": 63488 00:09:43.035 }, 00:09:43.035 { 00:09:43.035 "name": null, 00:09:43.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.035 "is_configured": false, 00:09:43.035 "data_offset": 2048, 00:09:43.035 "data_size": 63488 00:09:43.035 } 00:09:43.035 ] 00:09:43.035 }' 
00:09:43.035 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.035 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.295 [2024-11-19 12:29:48.529715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.295 [2024-11-19 12:29:48.529790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.295 [2024-11-19 12:29:48.529812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:43.295 [2024-11-19 12:29:48.529821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.295 [2024-11-19 12:29:48.530194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.295 [2024-11-19 12:29:48.530211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.295 [2024-11-19 12:29:48.530278] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:43.295 [2024-11-19 12:29:48.530299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.295 [2024-11-19 12:29:48.530394] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:43.295 [2024-11-19 12:29:48.530403] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.295 [2024-11-19 12:29:48.530635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:43.295 [2024-11-19 12:29:48.530782] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:43.295 [2024-11-19 12:29:48.530794] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:43.295 [2024-11-19 12:29:48.530895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.295 pt3 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.295 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.555 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.555 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.555 "name": "raid_bdev1", 00:09:43.555 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:43.555 "strip_size_kb": 0, 00:09:43.555 "state": "online", 00:09:43.555 "raid_level": "raid1", 00:09:43.555 "superblock": true, 00:09:43.555 "num_base_bdevs": 3, 00:09:43.555 "num_base_bdevs_discovered": 2, 00:09:43.555 "num_base_bdevs_operational": 2, 00:09:43.555 "base_bdevs_list": [ 00:09:43.555 { 00:09:43.555 "name": null, 00:09:43.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.555 "is_configured": false, 00:09:43.555 "data_offset": 2048, 00:09:43.555 "data_size": 63488 00:09:43.555 }, 00:09:43.555 { 00:09:43.555 "name": "pt2", 00:09:43.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.555 "is_configured": true, 00:09:43.555 "data_offset": 2048, 00:09:43.555 "data_size": 63488 00:09:43.555 }, 00:09:43.555 { 00:09:43.555 "name": "pt3", 00:09:43.555 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.555 "is_configured": true, 00:09:43.555 "data_offset": 2048, 00:09:43.555 "data_size": 63488 00:09:43.555 } 00:09:43.555 ] 00:09:43.555 }' 00:09:43.555 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.555 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.815 12:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:43.815 12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.815 
12:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.815 [2024-11-19 12:29:49.004915] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.815 [2024-11-19 12:29:49.005022] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.815 [2024-11-19 12:29:49.005126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.815 [2024-11-19 12:29:49.005199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.815 [2024-11-19 12:29:49.005257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.815 12:29:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.815 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.076 [2024-11-19 12:29:49.080712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.076 [2024-11-19 12:29:49.080847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.076 [2024-11-19 12:29:49.080884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:44.076 [2024-11-19 12:29:49.080913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.076 [2024-11-19 12:29:49.083089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.076 [2024-11-19 12:29:49.083158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.076 [2024-11-19 12:29:49.083248] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:44.076 [2024-11-19 12:29:49.083306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:44.076 [2024-11-19 12:29:49.083429] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:44.076 [2024-11-19 12:29:49.083486] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.076 [2024-11-19 12:29:49.083553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:09:44.076 [2024-11-19 
12:29:49.083629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.076 pt1 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.076 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.077 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.077 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.077 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.077 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.077 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.077 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.077 12:29:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.077 "name": "raid_bdev1", 00:09:44.077 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:44.077 "strip_size_kb": 0, 00:09:44.077 "state": "configuring", 00:09:44.077 "raid_level": "raid1", 00:09:44.077 "superblock": true, 00:09:44.077 "num_base_bdevs": 3, 00:09:44.077 "num_base_bdevs_discovered": 1, 00:09:44.077 "num_base_bdevs_operational": 2, 00:09:44.077 "base_bdevs_list": [ 00:09:44.077 { 00:09:44.077 "name": null, 00:09:44.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.077 "is_configured": false, 00:09:44.077 "data_offset": 2048, 00:09:44.077 "data_size": 63488 00:09:44.077 }, 00:09:44.077 { 00:09:44.077 "name": "pt2", 00:09:44.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.077 "is_configured": true, 00:09:44.077 "data_offset": 2048, 00:09:44.077 "data_size": 63488 00:09:44.077 }, 00:09:44.077 { 00:09:44.077 "name": null, 00:09:44.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.077 "is_configured": false, 00:09:44.077 "data_offset": 2048, 00:09:44.077 "data_size": 63488 00:09:44.077 } 00:09:44.077 ] 00:09:44.077 }' 00:09:44.077 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.077 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.337 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:44.337 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:44.337 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.337 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.337 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.337 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:09:44.337 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.337 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.337 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.337 [2024-11-19 12:29:49.543930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.337 [2024-11-19 12:29:49.544014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.338 [2024-11-19 12:29:49.544034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:44.338 [2024-11-19 12:29:49.544046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.338 [2024-11-19 12:29:49.544455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.338 [2024-11-19 12:29:49.544479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.338 [2024-11-19 12:29:49.544557] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:44.338 [2024-11-19 12:29:49.544600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.338 [2024-11-19 12:29:49.544700] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:44.338 [2024-11-19 12:29:49.544712] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.338 [2024-11-19 12:29:49.544967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:44.338 [2024-11-19 12:29:49.545094] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:44.338 [2024-11-19 12:29:49.545104] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000007400 00:09:44.338 [2024-11-19 12:29:49.545208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.338 pt3 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.338 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.597 12:29:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.597 "name": "raid_bdev1", 00:09:44.597 "uuid": "5a4a80e3-c4ba-4567-8391-6c9328af6be5", 00:09:44.597 "strip_size_kb": 0, 00:09:44.597 "state": "online", 00:09:44.597 "raid_level": "raid1", 00:09:44.597 "superblock": true, 00:09:44.597 "num_base_bdevs": 3, 00:09:44.597 "num_base_bdevs_discovered": 2, 00:09:44.597 "num_base_bdevs_operational": 2, 00:09:44.597 "base_bdevs_list": [ 00:09:44.597 { 00:09:44.597 "name": null, 00:09:44.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.597 "is_configured": false, 00:09:44.597 "data_offset": 2048, 00:09:44.597 "data_size": 63488 00:09:44.597 }, 00:09:44.597 { 00:09:44.597 "name": "pt2", 00:09:44.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.597 "is_configured": true, 00:09:44.597 "data_offset": 2048, 00:09:44.597 "data_size": 63488 00:09:44.597 }, 00:09:44.597 { 00:09:44.597 "name": "pt3", 00:09:44.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.597 "is_configured": true, 00:09:44.597 "data_offset": 2048, 00:09:44.597 "data_size": 63488 00:09:44.597 } 00:09:44.597 ] 00:09:44.597 }' 00:09:44.597 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.597 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.857 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:44.857 12:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:44.857 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.857 12:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.857 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.857 12:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:44.857 
12:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.857 12:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:44.857 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.857 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.857 [2024-11-19 12:29:50.047352] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.857 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.858 12:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5a4a80e3-c4ba-4567-8391-6c9328af6be5 '!=' 5a4a80e3-c4ba-4567-8391-6c9328af6be5 ']' 00:09:44.858 12:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79842 00:09:44.858 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79842 ']' 00:09:44.858 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79842 00:09:44.858 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:44.858 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.858 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79842 00:09:45.118 killing process with pid 79842 00:09:45.118 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.118 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.118 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79842' 00:09:45.118 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 79842 00:09:45.118 [2024-11-19 
12:29:50.125165] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.118 [2024-11-19 12:29:50.125251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.118 [2024-11-19 12:29:50.125311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.118 [2024-11-19 12:29:50.125321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:45.118 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79842 00:09:45.118 [2024-11-19 12:29:50.158173] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.378 12:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:45.378 00:09:45.378 real 0m6.632s 00:09:45.378 user 0m11.066s 00:09:45.378 sys 0m1.422s 00:09:45.378 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.378 ************************************ 00:09:45.378 END TEST raid_superblock_test 00:09:45.378 ************************************ 00:09:45.378 12:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.378 12:29:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:45.378 12:29:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:45.378 12:29:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.378 12:29:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.378 ************************************ 00:09:45.378 START TEST raid_read_error_test 00:09:45.378 ************************************ 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:45.378 12:29:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NUNNYF3doH 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80274 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80274 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80274 ']' 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.378 12:29:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.378 [2024-11-19 12:29:50.590512] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:45.378 [2024-11-19 12:29:50.590646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80274 ] 00:09:45.644 [2024-11-19 12:29:50.757708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.644 [2024-11-19 12:29:50.805136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.644 [2024-11-19 12:29:50.847151] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.644 [2024-11-19 12:29:50.847202] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.229 BaseBdev1_malloc 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.229 true 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.229 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.490 [2024-11-19 12:29:51.489334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:46.490 [2024-11-19 12:29:51.489429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.490 [2024-11-19 12:29:51.489457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:46.490 [2024-11-19 12:29:51.489466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.490 [2024-11-19 12:29:51.491687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.490 [2024-11-19 12:29:51.491755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:46.490 BaseBdev1 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.490 BaseBdev2_malloc 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.490 true 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.490 [2024-11-19 12:29:51.530700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:46.490 [2024-11-19 12:29:51.530846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.490 [2024-11-19 12:29:51.530877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:46.490 [2024-11-19 12:29:51.530888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.490 [2024-11-19 12:29:51.533090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.490 [2024-11-19 12:29:51.533130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:46.490 BaseBdev2 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.490 BaseBdev3_malloc 00:09:46.490 12:29:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.490 true 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.490 [2024-11-19 12:29:51.559238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:46.490 [2024-11-19 12:29:51.559327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.490 [2024-11-19 12:29:51.559380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:46.490 [2024-11-19 12:29:51.559420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.490 [2024-11-19 12:29:51.561572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.490 [2024-11-19 12:29:51.561644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:46.490 BaseBdev3 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.490 [2024-11-19 12:29:51.571277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.490 [2024-11-19 12:29:51.573126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.490 [2024-11-19 12:29:51.573209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.490 [2024-11-19 12:29:51.573377] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:46.490 [2024-11-19 12:29:51.573392] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:46.490 [2024-11-19 12:29:51.573656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:46.490 [2024-11-19 12:29:51.573833] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:46.490 [2024-11-19 12:29:51.573854] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:46.490 [2024-11-19 12:29:51.573986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.490 12:29:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.490 "name": "raid_bdev1", 00:09:46.490 "uuid": "762ea806-d610-4c67-bfdf-80b2cc145101", 00:09:46.490 "strip_size_kb": 0, 00:09:46.490 "state": "online", 00:09:46.490 "raid_level": "raid1", 00:09:46.490 "superblock": true, 00:09:46.490 "num_base_bdevs": 3, 00:09:46.490 "num_base_bdevs_discovered": 3, 00:09:46.490 "num_base_bdevs_operational": 3, 00:09:46.490 "base_bdevs_list": [ 00:09:46.490 { 00:09:46.490 "name": "BaseBdev1", 00:09:46.490 "uuid": "7cfb2ded-14a5-5f1a-a8ad-9d0e1c0d5866", 00:09:46.490 "is_configured": true, 00:09:46.490 "data_offset": 2048, 00:09:46.490 "data_size": 63488 00:09:46.490 }, 00:09:46.490 { 00:09:46.490 "name": "BaseBdev2", 00:09:46.490 "uuid": "b03361d7-97cb-5526-bfbd-f59458b15cc5", 00:09:46.490 "is_configured": true, 00:09:46.490 "data_offset": 2048, 00:09:46.490 "data_size": 63488 
00:09:46.490 }, 00:09:46.490 { 00:09:46.490 "name": "BaseBdev3", 00:09:46.490 "uuid": "755de6d5-adec-5da7-b2f2-9a3fe1b1cc75", 00:09:46.490 "is_configured": true, 00:09:46.490 "data_offset": 2048, 00:09:46.490 "data_size": 63488 00:09:46.490 } 00:09:46.490 ] 00:09:46.490 }' 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.490 12:29:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.750 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:46.750 12:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:47.010 [2024-11-19 12:29:52.090898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:47.949 12:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:47.949 12:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.949 12:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.949 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.949 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:47.949 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:47.949 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:47.949 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.950 
12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.950 "name": "raid_bdev1", 00:09:47.950 "uuid": "762ea806-d610-4c67-bfdf-80b2cc145101", 00:09:47.950 "strip_size_kb": 0, 00:09:47.950 "state": "online", 00:09:47.950 "raid_level": "raid1", 00:09:47.950 "superblock": true, 00:09:47.950 "num_base_bdevs": 3, 00:09:47.950 "num_base_bdevs_discovered": 3, 00:09:47.950 "num_base_bdevs_operational": 3, 00:09:47.950 "base_bdevs_list": [ 00:09:47.950 { 00:09:47.950 "name": "BaseBdev1", 00:09:47.950 "uuid": "7cfb2ded-14a5-5f1a-a8ad-9d0e1c0d5866", 
00:09:47.950 "is_configured": true, 00:09:47.950 "data_offset": 2048, 00:09:47.950 "data_size": 63488 00:09:47.950 }, 00:09:47.950 { 00:09:47.950 "name": "BaseBdev2", 00:09:47.950 "uuid": "b03361d7-97cb-5526-bfbd-f59458b15cc5", 00:09:47.950 "is_configured": true, 00:09:47.950 "data_offset": 2048, 00:09:47.950 "data_size": 63488 00:09:47.950 }, 00:09:47.950 { 00:09:47.950 "name": "BaseBdev3", 00:09:47.950 "uuid": "755de6d5-adec-5da7-b2f2-9a3fe1b1cc75", 00:09:47.950 "is_configured": true, 00:09:47.950 "data_offset": 2048, 00:09:47.950 "data_size": 63488 00:09:47.950 } 00:09:47.950 ] 00:09:47.950 }' 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.950 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.519 [2024-11-19 12:29:53.482336] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.519 [2024-11-19 12:29:53.482412] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.519 [2024-11-19 12:29:53.484950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.519 [2024-11-19 12:29:53.485037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.519 [2024-11-19 12:29:53.485176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.519 [2024-11-19 12:29:53.485247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:48.519 { 00:09:48.519 "results": [ 00:09:48.519 { 00:09:48.519 "job": "raid_bdev1", 
00:09:48.519 "core_mask": "0x1", 00:09:48.519 "workload": "randrw", 00:09:48.519 "percentage": 50, 00:09:48.519 "status": "finished", 00:09:48.519 "queue_depth": 1, 00:09:48.519 "io_size": 131072, 00:09:48.519 "runtime": 1.392296, 00:09:48.519 "iops": 14173.710188063458, 00:09:48.519 "mibps": 1771.7137735079323, 00:09:48.519 "io_failed": 0, 00:09:48.519 "io_timeout": 0, 00:09:48.519 "avg_latency_us": 67.97567968390068, 00:09:48.519 "min_latency_us": 22.91703056768559, 00:09:48.519 "max_latency_us": 1452.380786026201 00:09:48.519 } 00:09:48.519 ], 00:09:48.519 "core_count": 1 00:09:48.519 } 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80274 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80274 ']' 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80274 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80274 00:09:48.519 killing process with pid 80274 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80274' 00:09:48.519 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80274 00:09:48.520 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80274 00:09:48.520 [2024-11-19 12:29:53.527263] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.520 [2024-11-19 12:29:53.553331] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NUNNYF3doH 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:48.779 00:09:48.779 real 0m3.324s 00:09:48.779 user 0m4.194s 00:09:48.779 sys 0m0.569s 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.779 12:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.779 ************************************ 00:09:48.779 END TEST raid_read_error_test 00:09:48.779 ************************************ 00:09:48.779 12:29:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:48.779 12:29:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:48.779 12:29:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.779 12:29:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.779 ************************************ 00:09:48.779 START TEST raid_write_error_test 00:09:48.779 ************************************ 00:09:48.779 12:29:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:48.779 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:48.779 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:48.779 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:48.779 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.780 12:29:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zaZr9fJ5Cb 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80406 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80406 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80406 ']' 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.780 12:29:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.780 [2024-11-19 12:29:53.983081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:48.780 [2024-11-19 12:29:53.983209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80406 ] 00:09:49.039 [2024-11-19 12:29:54.142639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.039 [2024-11-19 12:29:54.187615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.039 [2024-11-19 12:29:54.229013] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.039 [2024-11-19 12:29:54.229050] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.608 BaseBdev1_malloc 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.608 true 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.608 [2024-11-19 12:29:54.846731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.608 [2024-11-19 12:29:54.846793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.608 [2024-11-19 12:29:54.846813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.608 [2024-11-19 12:29:54.846822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.608 [2024-11-19 12:29:54.848998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.608 [2024-11-19 12:29:54.849035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.608 BaseBdev1 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.608 12:29:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.867 BaseBdev2_malloc 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.867 true 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.867 [2024-11-19 12:29:54.884612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.867 [2024-11-19 12:29:54.884712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.867 [2024-11-19 12:29:54.884735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.867 [2024-11-19 12:29:54.884758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.867 [2024-11-19 12:29:54.886995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.867 [2024-11-19 12:29:54.887027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.867 BaseBdev2 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.867 12:29:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.867 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.868 BaseBdev3_malloc 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.868 true 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.868 [2024-11-19 12:29:54.913374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.868 [2024-11-19 12:29:54.913419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.868 [2024-11-19 12:29:54.913438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.868 [2024-11-19 12:29:54.913446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.868 [2024-11-19 12:29:54.915490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.868 [2024-11-19 12:29:54.915523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:49.868 BaseBdev3 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.868 [2024-11-19 12:29:54.921413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.868 [2024-11-19 12:29:54.923208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.868 [2024-11-19 12:29:54.923291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.868 [2024-11-19 12:29:54.923461] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:49.868 [2024-11-19 12:29:54.923489] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.868 [2024-11-19 12:29:54.923734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:49.868 [2024-11-19 12:29:54.923906] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:49.868 [2024-11-19 12:29:54.923922] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:49.868 [2024-11-19 12:29:54.924049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.868 "name": "raid_bdev1", 00:09:49.868 "uuid": "9afae65b-57cd-40e9-b768-94d666cbe4d5", 00:09:49.868 "strip_size_kb": 0, 00:09:49.868 "state": "online", 00:09:49.868 "raid_level": "raid1", 00:09:49.868 "superblock": true, 00:09:49.868 "num_base_bdevs": 3, 00:09:49.868 "num_base_bdevs_discovered": 3, 00:09:49.868 "num_base_bdevs_operational": 3, 00:09:49.868 "base_bdevs_list": [ 00:09:49.868 { 00:09:49.868 "name": "BaseBdev1", 00:09:49.868 
"uuid": "be9a6f71-736e-5cd1-8095-98694d3a2bf0", 00:09:49.868 "is_configured": true, 00:09:49.868 "data_offset": 2048, 00:09:49.868 "data_size": 63488 00:09:49.868 }, 00:09:49.868 { 00:09:49.868 "name": "BaseBdev2", 00:09:49.868 "uuid": "355278e1-8590-5c28-af58-6374934b567e", 00:09:49.868 "is_configured": true, 00:09:49.868 "data_offset": 2048, 00:09:49.868 "data_size": 63488 00:09:49.868 }, 00:09:49.868 { 00:09:49.868 "name": "BaseBdev3", 00:09:49.868 "uuid": "dedfc46a-f73c-5139-a089-5c2164515736", 00:09:49.868 "is_configured": true, 00:09:49.868 "data_offset": 2048, 00:09:49.868 "data_size": 63488 00:09:49.868 } 00:09:49.868 ] 00:09:49.868 }' 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.868 12:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.128 12:29:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.128 12:29:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.388 [2024-11-19 12:29:55.464839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:51.326 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:51.326 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.326 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.326 [2024-11-19 12:29:56.359993] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:51.326 [2024-11-19 12:29:56.360047] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.326 [2024-11-19 12:29:56.360267] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:09:51.326 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.326 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.326 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.327 "name": "raid_bdev1", 00:09:51.327 "uuid": "9afae65b-57cd-40e9-b768-94d666cbe4d5", 00:09:51.327 "strip_size_kb": 0, 00:09:51.327 "state": "online", 00:09:51.327 "raid_level": "raid1", 00:09:51.327 "superblock": true, 00:09:51.327 "num_base_bdevs": 3, 00:09:51.327 "num_base_bdevs_discovered": 2, 00:09:51.327 "num_base_bdevs_operational": 2, 00:09:51.327 "base_bdevs_list": [ 00:09:51.327 { 00:09:51.327 "name": null, 00:09:51.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.327 "is_configured": false, 00:09:51.327 "data_offset": 0, 00:09:51.327 "data_size": 63488 00:09:51.327 }, 00:09:51.327 { 00:09:51.327 "name": "BaseBdev2", 00:09:51.327 "uuid": "355278e1-8590-5c28-af58-6374934b567e", 00:09:51.327 "is_configured": true, 00:09:51.327 "data_offset": 2048, 00:09:51.327 "data_size": 63488 00:09:51.327 }, 00:09:51.327 { 00:09:51.327 "name": "BaseBdev3", 00:09:51.327 "uuid": "dedfc46a-f73c-5139-a089-5c2164515736", 00:09:51.327 "is_configured": true, 00:09:51.327 "data_offset": 2048, 00:09:51.327 "data_size": 63488 00:09:51.327 } 00:09:51.327 ] 00:09:51.327 }' 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.327 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.587 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.587 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.587 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.587 [2024-11-19 12:29:56.822465] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.587 [2024-11-19 12:29:56.822505] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.587 [2024-11-19 12:29:56.824907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.587 [2024-11-19 12:29:56.824954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.587 [2024-11-19 12:29:56.825035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.587 [2024-11-19 12:29:56.825051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:51.587 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.587 { 00:09:51.587 "results": [ 00:09:51.587 { 00:09:51.587 "job": "raid_bdev1", 00:09:51.587 "core_mask": "0x1", 00:09:51.587 "workload": "randrw", 00:09:51.587 "percentage": 50, 00:09:51.587 "status": "finished", 00:09:51.587 "queue_depth": 1, 00:09:51.587 "io_size": 131072, 00:09:51.587 "runtime": 1.358508, 00:09:51.587 "iops": 15865.199174388374, 00:09:51.587 "mibps": 1983.1498967985467, 00:09:51.587 "io_failed": 0, 00:09:51.587 "io_timeout": 0, 00:09:51.587 "avg_latency_us": 60.45308631894931, 00:09:51.587 "min_latency_us": 22.022707423580787, 00:09:51.587 "max_latency_us": 1387.989519650655 00:09:51.587 } 00:09:51.587 ], 00:09:51.587 "core_count": 1 00:09:51.587 } 00:09:51.587 12:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80406 00:09:51.587 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80406 ']' 00:09:51.587 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80406 00:09:51.587 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:51.587 12:29:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.587 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80406 00:09:51.847 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.847 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.847 killing process with pid 80406 00:09:51.847 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80406' 00:09:51.847 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80406 00:09:51.847 [2024-11-19 12:29:56.867846] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.847 12:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80406 00:09:51.847 [2024-11-19 12:29:56.893838] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.107 12:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zaZr9fJ5Cb 00:09:52.107 12:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.107 12:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.107 12:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:52.107 12:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:52.107 12:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.107 12:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:52.107 12:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:52.107 00:09:52.107 real 0m3.268s 00:09:52.107 user 0m4.090s 00:09:52.107 sys 0m0.576s 00:09:52.107 12:29:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.107 12:29:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.107 ************************************ 00:09:52.107 END TEST raid_write_error_test 00:09:52.107 ************************************ 00:09:52.107 12:29:57 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:52.107 12:29:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:52.107 12:29:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:52.107 12:29:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:52.107 12:29:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.107 12:29:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.107 ************************************ 00:09:52.107 START TEST raid_state_function_test 00:09:52.107 ************************************ 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:52.107 
12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:52.107 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80533 00:09:52.108 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:52.108 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80533' 00:09:52.108 Process raid pid: 80533 00:09:52.108 12:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80533 00:09:52.108 12:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80533 ']' 00:09:52.108 12:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.108 12:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.108 12:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.108 12:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.108 12:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.108 [2024-11-19 12:29:57.322301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:52.108 [2024-11-19 12:29:57.322445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.368 [2024-11-19 12:29:57.487373] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.368 [2024-11-19 12:29:57.537841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.368 [2024-11-19 12:29:57.579169] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.368 [2024-11-19 12:29:57.579210] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.938 [2024-11-19 12:29:58.164042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.938 [2024-11-19 12:29:58.164100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.938 [2024-11-19 12:29:58.164111] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.938 [2024-11-19 12:29:58.164121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.938 [2024-11-19 12:29:58.164127] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:52.938 [2024-11-19 12:29:58.164140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.938 [2024-11-19 12:29:58.164146] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:52.938 [2024-11-19 12:29:58.164154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.938 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.198 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.198 "name": "Existed_Raid", 00:09:53.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.198 "strip_size_kb": 64, 00:09:53.198 "state": "configuring", 00:09:53.198 "raid_level": "raid0", 00:09:53.198 "superblock": false, 00:09:53.198 "num_base_bdevs": 4, 00:09:53.198 "num_base_bdevs_discovered": 0, 00:09:53.198 "num_base_bdevs_operational": 4, 00:09:53.198 "base_bdevs_list": [ 00:09:53.198 { 00:09:53.198 "name": "BaseBdev1", 00:09:53.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.198 "is_configured": false, 00:09:53.198 "data_offset": 0, 00:09:53.198 "data_size": 0 00:09:53.198 }, 00:09:53.198 { 00:09:53.198 "name": "BaseBdev2", 00:09:53.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.198 "is_configured": false, 00:09:53.198 "data_offset": 0, 00:09:53.198 "data_size": 0 00:09:53.198 }, 00:09:53.198 { 00:09:53.198 "name": "BaseBdev3", 00:09:53.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.198 "is_configured": false, 00:09:53.198 "data_offset": 0, 00:09:53.198 "data_size": 0 00:09:53.198 }, 00:09:53.198 { 00:09:53.198 "name": "BaseBdev4", 00:09:53.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.198 "is_configured": false, 00:09:53.198 "data_offset": 0, 00:09:53.198 "data_size": 0 00:09:53.198 } 00:09:53.198 ] 00:09:53.198 }' 00:09:53.198 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.198 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.458 [2024-11-19 12:29:58.603188] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.458 [2024-11-19 12:29:58.603238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.458 [2024-11-19 12:29:58.615230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.458 [2024-11-19 12:29:58.615275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.458 [2024-11-19 12:29:58.615285] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.458 [2024-11-19 12:29:58.615296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.458 [2024-11-19 12:29:58.615304] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.458 [2024-11-19 12:29:58.615315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.458 [2024-11-19 12:29:58.615322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.458 [2024-11-19 12:29:58.615332] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.458 [2024-11-19 12:29:58.636083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.458 BaseBdev1 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:53.458 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.459 [ 00:09:53.459 { 00:09:53.459 "name": "BaseBdev1", 00:09:53.459 "aliases": [ 00:09:53.459 "2d5d9d39-4fe6-4313-8059-f0b32cd1ad03" 00:09:53.459 ], 00:09:53.459 "product_name": "Malloc disk", 00:09:53.459 "block_size": 512, 00:09:53.459 "num_blocks": 65536, 00:09:53.459 "uuid": "2d5d9d39-4fe6-4313-8059-f0b32cd1ad03", 00:09:53.459 "assigned_rate_limits": { 00:09:53.459 "rw_ios_per_sec": 0, 00:09:53.459 "rw_mbytes_per_sec": 0, 00:09:53.459 "r_mbytes_per_sec": 0, 00:09:53.459 "w_mbytes_per_sec": 0 00:09:53.459 }, 00:09:53.459 "claimed": true, 00:09:53.459 "claim_type": "exclusive_write", 00:09:53.459 "zoned": false, 00:09:53.459 "supported_io_types": { 00:09:53.459 "read": true, 00:09:53.459 "write": true, 00:09:53.459 "unmap": true, 00:09:53.459 "flush": true, 00:09:53.459 "reset": true, 00:09:53.459 "nvme_admin": false, 00:09:53.459 "nvme_io": false, 00:09:53.459 "nvme_io_md": false, 00:09:53.459 "write_zeroes": true, 00:09:53.459 "zcopy": true, 00:09:53.459 "get_zone_info": false, 00:09:53.459 "zone_management": false, 00:09:53.459 "zone_append": false, 00:09:53.459 "compare": false, 00:09:53.459 "compare_and_write": false, 00:09:53.459 "abort": true, 00:09:53.459 "seek_hole": false, 00:09:53.459 "seek_data": false, 00:09:53.459 "copy": true, 00:09:53.459 "nvme_iov_md": false 00:09:53.459 }, 00:09:53.459 "memory_domains": [ 00:09:53.459 { 00:09:53.459 "dma_device_id": "system", 00:09:53.459 "dma_device_type": 1 00:09:53.459 }, 00:09:53.459 { 00:09:53.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.459 "dma_device_type": 2 00:09:53.459 } 00:09:53.459 ], 00:09:53.459 "driver_specific": {} 00:09:53.459 } 00:09:53.459 ] 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.459 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.719 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.719 "name": "Existed_Raid", 
00:09:53.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.719 "strip_size_kb": 64, 00:09:53.719 "state": "configuring", 00:09:53.719 "raid_level": "raid0", 00:09:53.719 "superblock": false, 00:09:53.719 "num_base_bdevs": 4, 00:09:53.719 "num_base_bdevs_discovered": 1, 00:09:53.719 "num_base_bdevs_operational": 4, 00:09:53.719 "base_bdevs_list": [ 00:09:53.719 { 00:09:53.719 "name": "BaseBdev1", 00:09:53.719 "uuid": "2d5d9d39-4fe6-4313-8059-f0b32cd1ad03", 00:09:53.719 "is_configured": true, 00:09:53.719 "data_offset": 0, 00:09:53.719 "data_size": 65536 00:09:53.719 }, 00:09:53.719 { 00:09:53.719 "name": "BaseBdev2", 00:09:53.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.719 "is_configured": false, 00:09:53.719 "data_offset": 0, 00:09:53.719 "data_size": 0 00:09:53.719 }, 00:09:53.719 { 00:09:53.719 "name": "BaseBdev3", 00:09:53.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.719 "is_configured": false, 00:09:53.719 "data_offset": 0, 00:09:53.719 "data_size": 0 00:09:53.719 }, 00:09:53.719 { 00:09:53.719 "name": "BaseBdev4", 00:09:53.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.719 "is_configured": false, 00:09:53.719 "data_offset": 0, 00:09:53.719 "data_size": 0 00:09:53.719 } 00:09:53.719 ] 00:09:53.719 }' 00:09:53.719 12:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.719 12:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.979 [2024-11-19 12:29:59.119340] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.979 [2024-11-19 12:29:59.119403] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.979 [2024-11-19 12:29:59.131339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.979 [2024-11-19 12:29:59.133133] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.979 [2024-11-19 12:29:59.133170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.979 [2024-11-19 12:29:59.133194] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.979 [2024-11-19 12:29:59.133202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.979 [2024-11-19 12:29:59.133208] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.979 [2024-11-19 12:29:59.133216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.979 "name": "Existed_Raid", 00:09:53.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.979 "strip_size_kb": 64, 00:09:53.979 "state": "configuring", 00:09:53.979 "raid_level": "raid0", 00:09:53.979 "superblock": false, 00:09:53.979 "num_base_bdevs": 4, 00:09:53.979 
"num_base_bdevs_discovered": 1, 00:09:53.979 "num_base_bdevs_operational": 4, 00:09:53.979 "base_bdevs_list": [ 00:09:53.979 { 00:09:53.979 "name": "BaseBdev1", 00:09:53.979 "uuid": "2d5d9d39-4fe6-4313-8059-f0b32cd1ad03", 00:09:53.979 "is_configured": true, 00:09:53.979 "data_offset": 0, 00:09:53.979 "data_size": 65536 00:09:53.979 }, 00:09:53.979 { 00:09:53.979 "name": "BaseBdev2", 00:09:53.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.979 "is_configured": false, 00:09:53.979 "data_offset": 0, 00:09:53.979 "data_size": 0 00:09:53.979 }, 00:09:53.979 { 00:09:53.979 "name": "BaseBdev3", 00:09:53.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.979 "is_configured": false, 00:09:53.979 "data_offset": 0, 00:09:53.979 "data_size": 0 00:09:53.979 }, 00:09:53.979 { 00:09:53.979 "name": "BaseBdev4", 00:09:53.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.979 "is_configured": false, 00:09:53.979 "data_offset": 0, 00:09:53.979 "data_size": 0 00:09:53.979 } 00:09:53.979 ] 00:09:53.979 }' 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.979 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.549 [2024-11-19 12:29:59.572634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.549 BaseBdev2 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:54.549 12:29:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.549 [ 00:09:54.549 { 00:09:54.549 "name": "BaseBdev2", 00:09:54.549 "aliases": [ 00:09:54.549 "95de6dd7-4c12-44df-a5ef-ae0e4cea5390" 00:09:54.549 ], 00:09:54.549 "product_name": "Malloc disk", 00:09:54.549 "block_size": 512, 00:09:54.549 "num_blocks": 65536, 00:09:54.549 "uuid": "95de6dd7-4c12-44df-a5ef-ae0e4cea5390", 00:09:54.549 "assigned_rate_limits": { 00:09:54.549 "rw_ios_per_sec": 0, 00:09:54.549 "rw_mbytes_per_sec": 0, 00:09:54.549 "r_mbytes_per_sec": 0, 00:09:54.549 "w_mbytes_per_sec": 0 00:09:54.549 }, 00:09:54.549 "claimed": true, 00:09:54.549 "claim_type": "exclusive_write", 00:09:54.549 "zoned": false, 00:09:54.549 "supported_io_types": { 
00:09:54.549 "read": true, 00:09:54.549 "write": true, 00:09:54.549 "unmap": true, 00:09:54.549 "flush": true, 00:09:54.549 "reset": true, 00:09:54.549 "nvme_admin": false, 00:09:54.549 "nvme_io": false, 00:09:54.549 "nvme_io_md": false, 00:09:54.549 "write_zeroes": true, 00:09:54.549 "zcopy": true, 00:09:54.549 "get_zone_info": false, 00:09:54.549 "zone_management": false, 00:09:54.549 "zone_append": false, 00:09:54.549 "compare": false, 00:09:54.549 "compare_and_write": false, 00:09:54.549 "abort": true, 00:09:54.549 "seek_hole": false, 00:09:54.549 "seek_data": false, 00:09:54.549 "copy": true, 00:09:54.549 "nvme_iov_md": false 00:09:54.549 }, 00:09:54.549 "memory_domains": [ 00:09:54.549 { 00:09:54.549 "dma_device_id": "system", 00:09:54.549 "dma_device_type": 1 00:09:54.549 }, 00:09:54.549 { 00:09:54.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.549 "dma_device_type": 2 00:09:54.549 } 00:09:54.549 ], 00:09:54.549 "driver_specific": {} 00:09:54.549 } 00:09:54.549 ] 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.549 "name": "Existed_Raid", 00:09:54.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.549 "strip_size_kb": 64, 00:09:54.549 "state": "configuring", 00:09:54.549 "raid_level": "raid0", 00:09:54.549 "superblock": false, 00:09:54.549 "num_base_bdevs": 4, 00:09:54.549 "num_base_bdevs_discovered": 2, 00:09:54.549 "num_base_bdevs_operational": 4, 00:09:54.549 "base_bdevs_list": [ 00:09:54.549 { 00:09:54.549 "name": "BaseBdev1", 00:09:54.549 "uuid": "2d5d9d39-4fe6-4313-8059-f0b32cd1ad03", 00:09:54.549 "is_configured": true, 00:09:54.549 "data_offset": 0, 00:09:54.549 "data_size": 65536 00:09:54.549 }, 00:09:54.549 { 00:09:54.549 "name": "BaseBdev2", 00:09:54.549 "uuid": "95de6dd7-4c12-44df-a5ef-ae0e4cea5390", 00:09:54.549 
"is_configured": true, 00:09:54.549 "data_offset": 0, 00:09:54.549 "data_size": 65536 00:09:54.549 }, 00:09:54.549 { 00:09:54.549 "name": "BaseBdev3", 00:09:54.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.549 "is_configured": false, 00:09:54.549 "data_offset": 0, 00:09:54.549 "data_size": 0 00:09:54.549 }, 00:09:54.549 { 00:09:54.549 "name": "BaseBdev4", 00:09:54.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.549 "is_configured": false, 00:09:54.549 "data_offset": 0, 00:09:54.549 "data_size": 0 00:09:54.549 } 00:09:54.549 ] 00:09:54.549 }' 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.549 12:29:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.120 [2024-11-19 12:30:00.090743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.120 BaseBdev3 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.120 [ 00:09:55.120 { 00:09:55.120 "name": "BaseBdev3", 00:09:55.120 "aliases": [ 00:09:55.120 "1ffba90f-98ac-4b7f-84df-773637bd8fc4" 00:09:55.120 ], 00:09:55.120 "product_name": "Malloc disk", 00:09:55.120 "block_size": 512, 00:09:55.120 "num_blocks": 65536, 00:09:55.120 "uuid": "1ffba90f-98ac-4b7f-84df-773637bd8fc4", 00:09:55.120 "assigned_rate_limits": { 00:09:55.120 "rw_ios_per_sec": 0, 00:09:55.120 "rw_mbytes_per_sec": 0, 00:09:55.120 "r_mbytes_per_sec": 0, 00:09:55.120 "w_mbytes_per_sec": 0 00:09:55.120 }, 00:09:55.120 "claimed": true, 00:09:55.120 "claim_type": "exclusive_write", 00:09:55.120 "zoned": false, 00:09:55.120 "supported_io_types": { 00:09:55.120 "read": true, 00:09:55.120 "write": true, 00:09:55.120 "unmap": true, 00:09:55.120 "flush": true, 00:09:55.120 "reset": true, 00:09:55.120 "nvme_admin": false, 00:09:55.120 "nvme_io": false, 00:09:55.120 "nvme_io_md": false, 00:09:55.120 "write_zeroes": true, 00:09:55.120 "zcopy": true, 00:09:55.120 "get_zone_info": false, 00:09:55.120 "zone_management": false, 00:09:55.120 "zone_append": false, 00:09:55.120 "compare": false, 00:09:55.120 "compare_and_write": false, 
00:09:55.120 "abort": true, 00:09:55.120 "seek_hole": false, 00:09:55.120 "seek_data": false, 00:09:55.120 "copy": true, 00:09:55.120 "nvme_iov_md": false 00:09:55.120 }, 00:09:55.120 "memory_domains": [ 00:09:55.120 { 00:09:55.120 "dma_device_id": "system", 00:09:55.120 "dma_device_type": 1 00:09:55.120 }, 00:09:55.120 { 00:09:55.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.120 "dma_device_type": 2 00:09:55.120 } 00:09:55.120 ], 00:09:55.120 "driver_specific": {} 00:09:55.120 } 00:09:55.120 ] 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.120 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.120 "name": "Existed_Raid", 00:09:55.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.120 "strip_size_kb": 64, 00:09:55.120 "state": "configuring", 00:09:55.120 "raid_level": "raid0", 00:09:55.120 "superblock": false, 00:09:55.120 "num_base_bdevs": 4, 00:09:55.120 "num_base_bdevs_discovered": 3, 00:09:55.120 "num_base_bdevs_operational": 4, 00:09:55.120 "base_bdevs_list": [ 00:09:55.120 { 00:09:55.120 "name": "BaseBdev1", 00:09:55.120 "uuid": "2d5d9d39-4fe6-4313-8059-f0b32cd1ad03", 00:09:55.120 "is_configured": true, 00:09:55.120 "data_offset": 0, 00:09:55.120 "data_size": 65536 00:09:55.120 }, 00:09:55.120 { 00:09:55.121 "name": "BaseBdev2", 00:09:55.121 "uuid": "95de6dd7-4c12-44df-a5ef-ae0e4cea5390", 00:09:55.121 "is_configured": true, 00:09:55.121 "data_offset": 0, 00:09:55.121 "data_size": 65536 00:09:55.121 }, 00:09:55.121 { 00:09:55.121 "name": "BaseBdev3", 00:09:55.121 "uuid": "1ffba90f-98ac-4b7f-84df-773637bd8fc4", 00:09:55.121 "is_configured": true, 00:09:55.121 "data_offset": 0, 00:09:55.121 "data_size": 65536 00:09:55.121 }, 00:09:55.121 { 00:09:55.121 "name": "BaseBdev4", 00:09:55.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.121 "is_configured": false, 
00:09:55.121 "data_offset": 0, 00:09:55.121 "data_size": 0 00:09:55.121 } 00:09:55.121 ] 00:09:55.121 }' 00:09:55.121 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.121 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.380 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:55.380 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.380 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.380 [2024-11-19 12:30:00.572841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:55.380 [2024-11-19 12:30:00.572894] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:55.380 [2024-11-19 12:30:00.572903] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:55.381 [2024-11-19 12:30:00.573207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:55.381 [2024-11-19 12:30:00.573354] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:55.381 [2024-11-19 12:30:00.573371] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:55.381 [2024-11-19 12:30:00.573559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.381 BaseBdev4 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.381 [ 00:09:55.381 { 00:09:55.381 "name": "BaseBdev4", 00:09:55.381 "aliases": [ 00:09:55.381 "d6fa3926-bb85-4ce1-8cef-63be86341844" 00:09:55.381 ], 00:09:55.381 "product_name": "Malloc disk", 00:09:55.381 "block_size": 512, 00:09:55.381 "num_blocks": 65536, 00:09:55.381 "uuid": "d6fa3926-bb85-4ce1-8cef-63be86341844", 00:09:55.381 "assigned_rate_limits": { 00:09:55.381 "rw_ios_per_sec": 0, 00:09:55.381 "rw_mbytes_per_sec": 0, 00:09:55.381 "r_mbytes_per_sec": 0, 00:09:55.381 "w_mbytes_per_sec": 0 00:09:55.381 }, 00:09:55.381 "claimed": true, 00:09:55.381 "claim_type": "exclusive_write", 00:09:55.381 "zoned": false, 00:09:55.381 "supported_io_types": { 00:09:55.381 "read": true, 00:09:55.381 "write": true, 00:09:55.381 "unmap": true, 00:09:55.381 "flush": true, 00:09:55.381 "reset": true, 00:09:55.381 
"nvme_admin": false, 00:09:55.381 "nvme_io": false, 00:09:55.381 "nvme_io_md": false, 00:09:55.381 "write_zeroes": true, 00:09:55.381 "zcopy": true, 00:09:55.381 "get_zone_info": false, 00:09:55.381 "zone_management": false, 00:09:55.381 "zone_append": false, 00:09:55.381 "compare": false, 00:09:55.381 "compare_and_write": false, 00:09:55.381 "abort": true, 00:09:55.381 "seek_hole": false, 00:09:55.381 "seek_data": false, 00:09:55.381 "copy": true, 00:09:55.381 "nvme_iov_md": false 00:09:55.381 }, 00:09:55.381 "memory_domains": [ 00:09:55.381 { 00:09:55.381 "dma_device_id": "system", 00:09:55.381 "dma_device_type": 1 00:09:55.381 }, 00:09:55.381 { 00:09:55.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.381 "dma_device_type": 2 00:09:55.381 } 00:09:55.381 ], 00:09:55.381 "driver_specific": {} 00:09:55.381 } 00:09:55.381 ] 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.381 12:30:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.381 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.641 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.641 "name": "Existed_Raid", 00:09:55.641 "uuid": "53e3b540-3f59-4096-ab16-790fdddd14fa", 00:09:55.641 "strip_size_kb": 64, 00:09:55.641 "state": "online", 00:09:55.641 "raid_level": "raid0", 00:09:55.641 "superblock": false, 00:09:55.641 "num_base_bdevs": 4, 00:09:55.641 "num_base_bdevs_discovered": 4, 00:09:55.641 "num_base_bdevs_operational": 4, 00:09:55.641 "base_bdevs_list": [ 00:09:55.641 { 00:09:55.641 "name": "BaseBdev1", 00:09:55.641 "uuid": "2d5d9d39-4fe6-4313-8059-f0b32cd1ad03", 00:09:55.641 "is_configured": true, 00:09:55.641 "data_offset": 0, 00:09:55.641 "data_size": 65536 00:09:55.641 }, 00:09:55.641 { 00:09:55.641 "name": "BaseBdev2", 00:09:55.641 "uuid": "95de6dd7-4c12-44df-a5ef-ae0e4cea5390", 00:09:55.641 "is_configured": true, 00:09:55.641 "data_offset": 0, 00:09:55.641 "data_size": 65536 00:09:55.641 }, 00:09:55.641 { 00:09:55.641 "name": "BaseBdev3", 00:09:55.641 "uuid": 
"1ffba90f-98ac-4b7f-84df-773637bd8fc4", 00:09:55.641 "is_configured": true, 00:09:55.641 "data_offset": 0, 00:09:55.641 "data_size": 65536 00:09:55.641 }, 00:09:55.641 { 00:09:55.641 "name": "BaseBdev4", 00:09:55.641 "uuid": "d6fa3926-bb85-4ce1-8cef-63be86341844", 00:09:55.641 "is_configured": true, 00:09:55.641 "data_offset": 0, 00:09:55.641 "data_size": 65536 00:09:55.641 } 00:09:55.641 ] 00:09:55.641 }' 00:09:55.641 12:30:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.641 12:30:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.900 [2024-11-19 12:30:01.076375] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.900 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.900 12:30:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.900 "name": "Existed_Raid", 00:09:55.900 "aliases": [ 00:09:55.900 "53e3b540-3f59-4096-ab16-790fdddd14fa" 00:09:55.900 ], 00:09:55.900 "product_name": "Raid Volume", 00:09:55.900 "block_size": 512, 00:09:55.900 "num_blocks": 262144, 00:09:55.900 "uuid": "53e3b540-3f59-4096-ab16-790fdddd14fa", 00:09:55.900 "assigned_rate_limits": { 00:09:55.900 "rw_ios_per_sec": 0, 00:09:55.900 "rw_mbytes_per_sec": 0, 00:09:55.900 "r_mbytes_per_sec": 0, 00:09:55.900 "w_mbytes_per_sec": 0 00:09:55.900 }, 00:09:55.900 "claimed": false, 00:09:55.900 "zoned": false, 00:09:55.900 "supported_io_types": { 00:09:55.900 "read": true, 00:09:55.900 "write": true, 00:09:55.900 "unmap": true, 00:09:55.900 "flush": true, 00:09:55.900 "reset": true, 00:09:55.900 "nvme_admin": false, 00:09:55.900 "nvme_io": false, 00:09:55.900 "nvme_io_md": false, 00:09:55.900 "write_zeroes": true, 00:09:55.900 "zcopy": false, 00:09:55.900 "get_zone_info": false, 00:09:55.900 "zone_management": false, 00:09:55.900 "zone_append": false, 00:09:55.900 "compare": false, 00:09:55.900 "compare_and_write": false, 00:09:55.900 "abort": false, 00:09:55.900 "seek_hole": false, 00:09:55.900 "seek_data": false, 00:09:55.900 "copy": false, 00:09:55.900 "nvme_iov_md": false 00:09:55.900 }, 00:09:55.900 "memory_domains": [ 00:09:55.901 { 00:09:55.901 "dma_device_id": "system", 00:09:55.901 "dma_device_type": 1 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.901 "dma_device_type": 2 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "dma_device_id": "system", 00:09:55.901 "dma_device_type": 1 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.901 "dma_device_type": 2 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "dma_device_id": "system", 00:09:55.901 "dma_device_type": 1 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:55.901 "dma_device_type": 2 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "dma_device_id": "system", 00:09:55.901 "dma_device_type": 1 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.901 "dma_device_type": 2 00:09:55.901 } 00:09:55.901 ], 00:09:55.901 "driver_specific": { 00:09:55.901 "raid": { 00:09:55.901 "uuid": "53e3b540-3f59-4096-ab16-790fdddd14fa", 00:09:55.901 "strip_size_kb": 64, 00:09:55.901 "state": "online", 00:09:55.901 "raid_level": "raid0", 00:09:55.901 "superblock": false, 00:09:55.901 "num_base_bdevs": 4, 00:09:55.901 "num_base_bdevs_discovered": 4, 00:09:55.901 "num_base_bdevs_operational": 4, 00:09:55.901 "base_bdevs_list": [ 00:09:55.901 { 00:09:55.901 "name": "BaseBdev1", 00:09:55.901 "uuid": "2d5d9d39-4fe6-4313-8059-f0b32cd1ad03", 00:09:55.901 "is_configured": true, 00:09:55.901 "data_offset": 0, 00:09:55.901 "data_size": 65536 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "name": "BaseBdev2", 00:09:55.901 "uuid": "95de6dd7-4c12-44df-a5ef-ae0e4cea5390", 00:09:55.901 "is_configured": true, 00:09:55.901 "data_offset": 0, 00:09:55.901 "data_size": 65536 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "name": "BaseBdev3", 00:09:55.901 "uuid": "1ffba90f-98ac-4b7f-84df-773637bd8fc4", 00:09:55.901 "is_configured": true, 00:09:55.901 "data_offset": 0, 00:09:55.901 "data_size": 65536 00:09:55.901 }, 00:09:55.901 { 00:09:55.901 "name": "BaseBdev4", 00:09:55.901 "uuid": "d6fa3926-bb85-4ce1-8cef-63be86341844", 00:09:55.901 "is_configured": true, 00:09:55.901 "data_offset": 0, 00:09:55.901 "data_size": 65536 00:09:55.901 } 00:09:55.901 ] 00:09:55.901 } 00:09:55.901 } 00:09:55.901 }' 00:09:55.901 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:56.159 BaseBdev2 00:09:56.159 BaseBdev3 
00:09:56.159 BaseBdev4' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.159 12:30:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.159 12:30:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.159 [2024-11-19 12:30:01.391518] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.159 [2024-11-19 12:30:01.391564] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.159 [2024-11-19 12:30:01.391631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.159 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.417 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.417 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.417 "name": "Existed_Raid", 00:09:56.417 "uuid": "53e3b540-3f59-4096-ab16-790fdddd14fa", 00:09:56.417 "strip_size_kb": 64, 00:09:56.417 "state": "offline", 00:09:56.417 "raid_level": "raid0", 00:09:56.417 "superblock": false, 00:09:56.417 "num_base_bdevs": 4, 00:09:56.417 "num_base_bdevs_discovered": 3, 00:09:56.417 "num_base_bdevs_operational": 3, 00:09:56.417 "base_bdevs_list": [ 00:09:56.417 { 00:09:56.417 "name": null, 00:09:56.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.417 "is_configured": false, 00:09:56.417 "data_offset": 0, 00:09:56.417 "data_size": 65536 00:09:56.417 }, 00:09:56.417 { 00:09:56.417 "name": "BaseBdev2", 00:09:56.417 "uuid": "95de6dd7-4c12-44df-a5ef-ae0e4cea5390", 00:09:56.417 "is_configured": 
true, 00:09:56.417 "data_offset": 0, 00:09:56.417 "data_size": 65536 00:09:56.417 }, 00:09:56.417 { 00:09:56.417 "name": "BaseBdev3", 00:09:56.417 "uuid": "1ffba90f-98ac-4b7f-84df-773637bd8fc4", 00:09:56.417 "is_configured": true, 00:09:56.417 "data_offset": 0, 00:09:56.417 "data_size": 65536 00:09:56.417 }, 00:09:56.417 { 00:09:56.417 "name": "BaseBdev4", 00:09:56.417 "uuid": "d6fa3926-bb85-4ce1-8cef-63be86341844", 00:09:56.417 "is_configured": true, 00:09:56.417 "data_offset": 0, 00:09:56.417 "data_size": 65536 00:09:56.417 } 00:09:56.417 ] 00:09:56.417 }' 00:09:56.417 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.417 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.676 [2024-11-19 12:30:01.902033] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.676 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.935 [2024-11-19 12:30:01.957271] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.935 12:30:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.935 12:30:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.935 [2024-11-19 12:30:02.008381] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:56.935 [2024-11-19 12:30:02.008450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.935 BaseBdev2 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.935 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.935 [ 00:09:56.935 { 00:09:56.935 "name": "BaseBdev2", 00:09:56.935 "aliases": [ 00:09:56.935 "21eef390-0a9f-433d-83ed-9c70f979971d" 00:09:56.935 ], 00:09:56.935 "product_name": "Malloc disk", 00:09:56.935 "block_size": 512, 00:09:56.935 "num_blocks": 65536, 00:09:56.935 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:09:56.935 "assigned_rate_limits": { 00:09:56.935 "rw_ios_per_sec": 0, 00:09:56.935 "rw_mbytes_per_sec": 0, 00:09:56.935 "r_mbytes_per_sec": 0, 00:09:56.935 "w_mbytes_per_sec": 0 00:09:56.935 }, 00:09:56.935 "claimed": false, 00:09:56.935 "zoned": false, 00:09:56.935 "supported_io_types": { 00:09:56.935 "read": true, 00:09:56.935 "write": true, 00:09:56.935 "unmap": true, 00:09:56.935 "flush": true, 00:09:56.935 "reset": true, 00:09:56.935 "nvme_admin": false, 00:09:56.935 "nvme_io": false, 00:09:56.935 "nvme_io_md": false, 00:09:56.935 "write_zeroes": true, 00:09:56.935 "zcopy": true, 00:09:56.935 "get_zone_info": false, 00:09:56.935 "zone_management": false, 00:09:56.935 "zone_append": false, 00:09:56.935 "compare": false, 00:09:56.935 "compare_and_write": false, 00:09:56.935 "abort": true, 00:09:56.935 "seek_hole": false, 00:09:56.935 
"seek_data": false, 00:09:56.935 "copy": true, 00:09:56.935 "nvme_iov_md": false 00:09:56.935 }, 00:09:56.935 "memory_domains": [ 00:09:56.935 { 00:09:56.935 "dma_device_id": "system", 00:09:56.935 "dma_device_type": 1 00:09:56.935 }, 00:09:56.935 { 00:09:56.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.935 "dma_device_type": 2 00:09:56.935 } 00:09:56.935 ], 00:09:56.935 "driver_specific": {} 00:09:56.936 } 00:09:56.936 ] 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.936 BaseBdev3 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.936 [ 00:09:56.936 { 00:09:56.936 "name": "BaseBdev3", 00:09:56.936 "aliases": [ 00:09:56.936 "307bebfc-da08-4594-8636-09a65a16248e" 00:09:56.936 ], 00:09:56.936 "product_name": "Malloc disk", 00:09:56.936 "block_size": 512, 00:09:56.936 "num_blocks": 65536, 00:09:56.936 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:09:56.936 "assigned_rate_limits": { 00:09:56.936 "rw_ios_per_sec": 0, 00:09:56.936 "rw_mbytes_per_sec": 0, 00:09:56.936 "r_mbytes_per_sec": 0, 00:09:56.936 "w_mbytes_per_sec": 0 00:09:56.936 }, 00:09:56.936 "claimed": false, 00:09:56.936 "zoned": false, 00:09:56.936 "supported_io_types": { 00:09:56.936 "read": true, 00:09:56.936 "write": true, 00:09:56.936 "unmap": true, 00:09:56.936 "flush": true, 00:09:56.936 "reset": true, 00:09:56.936 "nvme_admin": false, 00:09:56.936 "nvme_io": false, 00:09:56.936 "nvme_io_md": false, 00:09:56.936 "write_zeroes": true, 00:09:56.936 "zcopy": true, 00:09:56.936 "get_zone_info": false, 00:09:56.936 "zone_management": false, 00:09:56.936 "zone_append": false, 00:09:56.936 "compare": false, 00:09:56.936 "compare_and_write": false, 00:09:56.936 "abort": true, 00:09:56.936 "seek_hole": false, 00:09:56.936 "seek_data": false, 
00:09:56.936 "copy": true, 00:09:56.936 "nvme_iov_md": false 00:09:56.936 }, 00:09:56.936 "memory_domains": [ 00:09:56.936 { 00:09:56.936 "dma_device_id": "system", 00:09:56.936 "dma_device_type": 1 00:09:56.936 }, 00:09:56.936 { 00:09:56.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.936 "dma_device_type": 2 00:09:56.936 } 00:09:56.936 ], 00:09:56.936 "driver_specific": {} 00:09:56.936 } 00:09:56.936 ] 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.936 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.196 BaseBdev4 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.196 
12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.196 [ 00:09:57.196 { 00:09:57.196 "name": "BaseBdev4", 00:09:57.196 "aliases": [ 00:09:57.196 "642fe61d-3cfc-406e-9ef8-e81ab789034d" 00:09:57.196 ], 00:09:57.196 "product_name": "Malloc disk", 00:09:57.196 "block_size": 512, 00:09:57.196 "num_blocks": 65536, 00:09:57.196 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:09:57.196 "assigned_rate_limits": { 00:09:57.196 "rw_ios_per_sec": 0, 00:09:57.196 "rw_mbytes_per_sec": 0, 00:09:57.196 "r_mbytes_per_sec": 0, 00:09:57.196 "w_mbytes_per_sec": 0 00:09:57.196 }, 00:09:57.196 "claimed": false, 00:09:57.196 "zoned": false, 00:09:57.196 "supported_io_types": { 00:09:57.196 "read": true, 00:09:57.196 "write": true, 00:09:57.196 "unmap": true, 00:09:57.196 "flush": true, 00:09:57.196 "reset": true, 00:09:57.196 "nvme_admin": false, 00:09:57.196 "nvme_io": false, 00:09:57.196 "nvme_io_md": false, 00:09:57.196 "write_zeroes": true, 00:09:57.196 "zcopy": true, 00:09:57.196 "get_zone_info": false, 00:09:57.196 "zone_management": false, 00:09:57.196 "zone_append": false, 00:09:57.196 "compare": false, 00:09:57.196 "compare_and_write": false, 00:09:57.196 "abort": true, 00:09:57.196 "seek_hole": false, 00:09:57.196 "seek_data": false, 00:09:57.196 
"copy": true, 00:09:57.196 "nvme_iov_md": false 00:09:57.196 }, 00:09:57.196 "memory_domains": [ 00:09:57.196 { 00:09:57.196 "dma_device_id": "system", 00:09:57.196 "dma_device_type": 1 00:09:57.196 }, 00:09:57.196 { 00:09:57.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.196 "dma_device_type": 2 00:09:57.196 } 00:09:57.196 ], 00:09:57.196 "driver_specific": {} 00:09:57.196 } 00:09:57.196 ] 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.196 [2024-11-19 12:30:02.238011] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.196 [2024-11-19 12:30:02.238054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.196 [2024-11-19 12:30:02.238075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.196 [2024-11-19 12:30:02.239864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.196 [2024-11-19 12:30:02.239916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.196 12:30:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.196 "name": "Existed_Raid", 00:09:57.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.196 "strip_size_kb": 64, 00:09:57.196 "state": "configuring", 00:09:57.196 
"raid_level": "raid0", 00:09:57.196 "superblock": false, 00:09:57.196 "num_base_bdevs": 4, 00:09:57.196 "num_base_bdevs_discovered": 3, 00:09:57.196 "num_base_bdevs_operational": 4, 00:09:57.196 "base_bdevs_list": [ 00:09:57.196 { 00:09:57.196 "name": "BaseBdev1", 00:09:57.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.196 "is_configured": false, 00:09:57.196 "data_offset": 0, 00:09:57.196 "data_size": 0 00:09:57.196 }, 00:09:57.196 { 00:09:57.196 "name": "BaseBdev2", 00:09:57.196 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:09:57.196 "is_configured": true, 00:09:57.196 "data_offset": 0, 00:09:57.196 "data_size": 65536 00:09:57.196 }, 00:09:57.196 { 00:09:57.196 "name": "BaseBdev3", 00:09:57.196 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:09:57.196 "is_configured": true, 00:09:57.196 "data_offset": 0, 00:09:57.196 "data_size": 65536 00:09:57.196 }, 00:09:57.196 { 00:09:57.196 "name": "BaseBdev4", 00:09:57.196 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:09:57.196 "is_configured": true, 00:09:57.196 "data_offset": 0, 00:09:57.196 "data_size": 65536 00:09:57.196 } 00:09:57.196 ] 00:09:57.196 }' 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.196 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.456 [2024-11-19 12:30:02.705246] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.456 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.715 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.715 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.715 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.715 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.715 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.715 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.715 "name": "Existed_Raid", 00:09:57.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.715 "strip_size_kb": 64, 00:09:57.715 "state": "configuring", 00:09:57.715 "raid_level": "raid0", 00:09:57.715 "superblock": false, 00:09:57.715 
"num_base_bdevs": 4, 00:09:57.715 "num_base_bdevs_discovered": 2, 00:09:57.715 "num_base_bdevs_operational": 4, 00:09:57.715 "base_bdevs_list": [ 00:09:57.715 { 00:09:57.715 "name": "BaseBdev1", 00:09:57.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.715 "is_configured": false, 00:09:57.715 "data_offset": 0, 00:09:57.715 "data_size": 0 00:09:57.715 }, 00:09:57.715 { 00:09:57.715 "name": null, 00:09:57.715 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:09:57.715 "is_configured": false, 00:09:57.715 "data_offset": 0, 00:09:57.715 "data_size": 65536 00:09:57.716 }, 00:09:57.716 { 00:09:57.716 "name": "BaseBdev3", 00:09:57.716 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:09:57.716 "is_configured": true, 00:09:57.716 "data_offset": 0, 00:09:57.716 "data_size": 65536 00:09:57.716 }, 00:09:57.716 { 00:09:57.716 "name": "BaseBdev4", 00:09:57.716 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:09:57.716 "is_configured": true, 00:09:57.716 "data_offset": 0, 00:09:57.716 "data_size": 65536 00:09:57.716 } 00:09:57.716 ] 00:09:57.716 }' 00:09:57.716 12:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.716 12:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:57.975 12:30:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.975 [2024-11-19 12:30:03.211448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.975 BaseBdev1 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.975 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.976 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.976 12:30:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.235 [ 00:09:58.235 { 00:09:58.235 "name": "BaseBdev1", 00:09:58.235 "aliases": [ 00:09:58.235 "f725f4a3-a2e8-4812-b65e-182dd7346e53" 00:09:58.235 ], 00:09:58.235 "product_name": "Malloc disk", 00:09:58.235 "block_size": 512, 00:09:58.235 "num_blocks": 65536, 00:09:58.235 "uuid": "f725f4a3-a2e8-4812-b65e-182dd7346e53", 00:09:58.235 "assigned_rate_limits": { 00:09:58.235 "rw_ios_per_sec": 0, 00:09:58.235 "rw_mbytes_per_sec": 0, 00:09:58.235 "r_mbytes_per_sec": 0, 00:09:58.235 "w_mbytes_per_sec": 0 00:09:58.235 }, 00:09:58.235 "claimed": true, 00:09:58.235 "claim_type": "exclusive_write", 00:09:58.235 "zoned": false, 00:09:58.235 "supported_io_types": { 00:09:58.235 "read": true, 00:09:58.235 "write": true, 00:09:58.235 "unmap": true, 00:09:58.235 "flush": true, 00:09:58.235 "reset": true, 00:09:58.235 "nvme_admin": false, 00:09:58.235 "nvme_io": false, 00:09:58.235 "nvme_io_md": false, 00:09:58.235 "write_zeroes": true, 00:09:58.235 "zcopy": true, 00:09:58.235 "get_zone_info": false, 00:09:58.235 "zone_management": false, 00:09:58.235 "zone_append": false, 00:09:58.235 "compare": false, 00:09:58.235 "compare_and_write": false, 00:09:58.235 "abort": true, 00:09:58.235 "seek_hole": false, 00:09:58.235 "seek_data": false, 00:09:58.235 "copy": true, 00:09:58.235 "nvme_iov_md": false 00:09:58.235 }, 00:09:58.235 "memory_domains": [ 00:09:58.235 { 00:09:58.235 "dma_device_id": "system", 00:09:58.235 "dma_device_type": 1 00:09:58.235 }, 00:09:58.235 { 00:09:58.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.235 "dma_device_type": 2 00:09:58.235 } 00:09:58.235 ], 00:09:58.235 "driver_specific": {} 00:09:58.235 } 00:09:58.235 ] 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.235 "name": "Existed_Raid", 00:09:58.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.235 "strip_size_kb": 64, 00:09:58.235 "state": "configuring", 00:09:58.235 "raid_level": "raid0", 00:09:58.235 "superblock": false, 
00:09:58.235 "num_base_bdevs": 4, 00:09:58.235 "num_base_bdevs_discovered": 3, 00:09:58.235 "num_base_bdevs_operational": 4, 00:09:58.235 "base_bdevs_list": [ 00:09:58.235 { 00:09:58.235 "name": "BaseBdev1", 00:09:58.235 "uuid": "f725f4a3-a2e8-4812-b65e-182dd7346e53", 00:09:58.235 "is_configured": true, 00:09:58.235 "data_offset": 0, 00:09:58.235 "data_size": 65536 00:09:58.235 }, 00:09:58.235 { 00:09:58.235 "name": null, 00:09:58.235 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:09:58.235 "is_configured": false, 00:09:58.235 "data_offset": 0, 00:09:58.235 "data_size": 65536 00:09:58.235 }, 00:09:58.235 { 00:09:58.235 "name": "BaseBdev3", 00:09:58.235 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:09:58.235 "is_configured": true, 00:09:58.235 "data_offset": 0, 00:09:58.235 "data_size": 65536 00:09:58.235 }, 00:09:58.235 { 00:09:58.235 "name": "BaseBdev4", 00:09:58.235 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:09:58.235 "is_configured": true, 00:09:58.235 "data_offset": 0, 00:09:58.235 "data_size": 65536 00:09:58.235 } 00:09:58.235 ] 00:09:58.235 }' 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.235 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:58.495 12:30:03 
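[Editor's annotation, not part of the captured log] The `verify_raid_bdev_state` helper traced above reduces to fetching the raid bdev JSON via `rpc_cmd bdev_raid_get_bdevs all`, selecting the entry by name with jq, and comparing a few fields against the expected values. A minimal sketch of that check, run against a trimmed sample of the JSON seen in this log (the sample payload and variable names are illustrative, not the helper's exact internals):

```shell
#!/usr/bin/env bash
# Sample of the raid bdev info dumped in the log above (abbreviated).
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null,        "is_configured": false}
  ]
}'

# Pull out the fields the helper compares.
state=$(jq -r '.state' <<< "$raid_bdev_info")
raid_level=$(jq -r '.raid_level' <<< "$raid_bdev_info")
strip_size=$(jq -r '.strip_size_kb' <<< "$raid_bdev_info")

# Fail loudly if any field differs from the expected state.
[[ $state == configuring ]] || { echo "unexpected state: $state" >&2; exit 1; }
[[ $raid_level == raid0 ]]  || { echo "unexpected level: $raid_level" >&2; exit 1; }
[[ $strip_size == 64 ]]     || { echo "unexpected strip size: $strip_size" >&2; exit 1; }
echo "state OK"
```

In the real test the JSON comes from the running SPDK target, so the same comparison detects when a base-bdev removal has flipped the array from `online` back to `configuring`.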
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.495 [2024-11-19 12:30:03.698694] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.495 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.755 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.755 "name": "Existed_Raid", 00:09:58.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.755 "strip_size_kb": 64, 00:09:58.755 "state": "configuring", 00:09:58.755 "raid_level": "raid0", 00:09:58.755 "superblock": false, 00:09:58.755 "num_base_bdevs": 4, 00:09:58.755 "num_base_bdevs_discovered": 2, 00:09:58.755 "num_base_bdevs_operational": 4, 00:09:58.755 "base_bdevs_list": [ 00:09:58.755 { 00:09:58.755 "name": "BaseBdev1", 00:09:58.755 "uuid": "f725f4a3-a2e8-4812-b65e-182dd7346e53", 00:09:58.755 "is_configured": true, 00:09:58.755 "data_offset": 0, 00:09:58.755 "data_size": 65536 00:09:58.755 }, 00:09:58.755 { 00:09:58.755 "name": null, 00:09:58.755 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:09:58.755 "is_configured": false, 00:09:58.755 "data_offset": 0, 00:09:58.755 "data_size": 65536 00:09:58.755 }, 00:09:58.755 { 00:09:58.755 "name": null, 00:09:58.755 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:09:58.755 "is_configured": false, 00:09:58.755 "data_offset": 0, 00:09:58.755 "data_size": 65536 00:09:58.755 }, 00:09:58.755 { 00:09:58.755 "name": "BaseBdev4", 00:09:58.755 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:09:58.755 "is_configured": true, 00:09:58.755 "data_offset": 0, 00:09:58.755 "data_size": 65536 00:09:58.755 } 00:09:58.755 ] 00:09:58.755 }' 00:09:58.755 12:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.755 12:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.014 [2024-11-19 12:30:04.209948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.014 "name": "Existed_Raid", 00:09:59.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.014 "strip_size_kb": 64, 00:09:59.014 "state": "configuring", 00:09:59.014 "raid_level": "raid0", 00:09:59.014 "superblock": false, 00:09:59.014 "num_base_bdevs": 4, 00:09:59.014 "num_base_bdevs_discovered": 3, 00:09:59.014 "num_base_bdevs_operational": 4, 00:09:59.014 "base_bdevs_list": [ 00:09:59.014 { 00:09:59.014 "name": "BaseBdev1", 00:09:59.014 "uuid": "f725f4a3-a2e8-4812-b65e-182dd7346e53", 00:09:59.014 "is_configured": true, 00:09:59.014 "data_offset": 0, 00:09:59.014 "data_size": 65536 00:09:59.014 }, 00:09:59.014 { 00:09:59.014 "name": null, 00:09:59.014 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:09:59.014 "is_configured": false, 00:09:59.014 "data_offset": 0, 00:09:59.014 "data_size": 65536 00:09:59.014 }, 00:09:59.014 { 00:09:59.014 "name": "BaseBdev3", 00:09:59.014 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:09:59.014 "is_configured": 
true, 00:09:59.014 "data_offset": 0, 00:09:59.014 "data_size": 65536 00:09:59.014 }, 00:09:59.014 { 00:09:59.014 "name": "BaseBdev4", 00:09:59.014 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:09:59.014 "is_configured": true, 00:09:59.014 "data_offset": 0, 00:09:59.014 "data_size": 65536 00:09:59.014 } 00:09:59.014 ] 00:09:59.014 }' 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.014 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.584 [2024-11-19 12:30:04.697127] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.584 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.585 "name": "Existed_Raid", 00:09:59.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.585 "strip_size_kb": 64, 00:09:59.585 "state": "configuring", 00:09:59.585 "raid_level": "raid0", 00:09:59.585 "superblock": false, 00:09:59.585 "num_base_bdevs": 4, 00:09:59.585 "num_base_bdevs_discovered": 2, 00:09:59.585 "num_base_bdevs_operational": 4, 00:09:59.585 
"base_bdevs_list": [ 00:09:59.585 { 00:09:59.585 "name": null, 00:09:59.585 "uuid": "f725f4a3-a2e8-4812-b65e-182dd7346e53", 00:09:59.585 "is_configured": false, 00:09:59.585 "data_offset": 0, 00:09:59.585 "data_size": 65536 00:09:59.585 }, 00:09:59.585 { 00:09:59.585 "name": null, 00:09:59.585 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:09:59.585 "is_configured": false, 00:09:59.585 "data_offset": 0, 00:09:59.585 "data_size": 65536 00:09:59.585 }, 00:09:59.585 { 00:09:59.585 "name": "BaseBdev3", 00:09:59.585 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:09:59.585 "is_configured": true, 00:09:59.585 "data_offset": 0, 00:09:59.585 "data_size": 65536 00:09:59.585 }, 00:09:59.585 { 00:09:59.585 "name": "BaseBdev4", 00:09:59.585 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:09:59.585 "is_configured": true, 00:09:59.585 "data_offset": 0, 00:09:59.585 "data_size": 65536 00:09:59.585 } 00:09:59.585 ] 00:09:59.585 }' 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.585 12:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:00.154 12:30:05 
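[Editor's annotation, not part of the captured log] After `bdev_malloc_delete BaseBdev1` the dump above shows `num_base_bdevs_discovered` drop to 2 while `num_base_bdevs_operational` stays 4. That discovered count is just the number of entries in `base_bdevs_list` with `is_configured == true`, which can be sketched with jq (sample array mirrors the log; names and values are illustrative):

```shell
#!/usr/bin/env bash
# base_bdevs_list as it appears after removing BaseBdev1 and BaseBdev3:
# two slots unconfigured (name null), two still configured.
base_bdevs='[
  {"name": null,        "is_configured": false},
  {"name": null,        "is_configured": false},
  {"name": "BaseBdev3", "is_configured": true},
  {"name": "BaseBdev4", "is_configured": true}
]'

# Count configured entries, i.e. num_base_bdevs_discovered.
discovered=$(jq '[.[] | select(.is_configured == true)] | length' <<< "$base_bdevs")
echo "$discovered"   # prints 2 for this sample
```

Re-adding a base bdev with `bdev_raid_add_base_bdev` bumps this count back up, and once it reaches `num_base_bdevs` the raid state transitions from `configuring` to `online`, which is exactly what the trace verifies next.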
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.154 [2024-11-19 12:30:05.174897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.154 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.155 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.155 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.155 12:30:05 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:00.155 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.155 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.155 "name": "Existed_Raid", 00:10:00.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.155 "strip_size_kb": 64, 00:10:00.155 "state": "configuring", 00:10:00.155 "raid_level": "raid0", 00:10:00.155 "superblock": false, 00:10:00.155 "num_base_bdevs": 4, 00:10:00.155 "num_base_bdevs_discovered": 3, 00:10:00.155 "num_base_bdevs_operational": 4, 00:10:00.155 "base_bdevs_list": [ 00:10:00.155 { 00:10:00.155 "name": null, 00:10:00.155 "uuid": "f725f4a3-a2e8-4812-b65e-182dd7346e53", 00:10:00.155 "is_configured": false, 00:10:00.155 "data_offset": 0, 00:10:00.155 "data_size": 65536 00:10:00.155 }, 00:10:00.155 { 00:10:00.155 "name": "BaseBdev2", 00:10:00.155 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:10:00.155 "is_configured": true, 00:10:00.155 "data_offset": 0, 00:10:00.155 "data_size": 65536 00:10:00.155 }, 00:10:00.155 { 00:10:00.155 "name": "BaseBdev3", 00:10:00.155 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:10:00.155 "is_configured": true, 00:10:00.155 "data_offset": 0, 00:10:00.155 "data_size": 65536 00:10:00.155 }, 00:10:00.155 { 00:10:00.155 "name": "BaseBdev4", 00:10:00.155 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:10:00.155 "is_configured": true, 00:10:00.155 "data_offset": 0, 00:10:00.155 "data_size": 65536 00:10:00.155 } 00:10:00.155 ] 00:10:00.155 }' 00:10:00.155 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.155 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.415 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.415 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:00.415 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:00.415 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.415 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f725f4a3-a2e8-4812-b65e-182dd7346e53 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 [2024-11-19 12:30:05.752793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:00.675 [2024-11-19 12:30:05.752838] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:00.675 [2024-11-19 12:30:05.752845] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:00.675 [2024-11-19 12:30:05.753110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:00.675 [2024-11-19 12:30:05.753239] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:00.675 [2024-11-19 12:30:05.753252] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:00.675 [2024-11-19 12:30:05.753412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.675 NewBaseBdev 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.675 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 [ 00:10:00.675 { 
00:10:00.675 "name": "NewBaseBdev", 00:10:00.675 "aliases": [ 00:10:00.675 "f725f4a3-a2e8-4812-b65e-182dd7346e53" 00:10:00.675 ], 00:10:00.675 "product_name": "Malloc disk", 00:10:00.675 "block_size": 512, 00:10:00.675 "num_blocks": 65536, 00:10:00.675 "uuid": "f725f4a3-a2e8-4812-b65e-182dd7346e53", 00:10:00.675 "assigned_rate_limits": { 00:10:00.675 "rw_ios_per_sec": 0, 00:10:00.676 "rw_mbytes_per_sec": 0, 00:10:00.676 "r_mbytes_per_sec": 0, 00:10:00.676 "w_mbytes_per_sec": 0 00:10:00.676 }, 00:10:00.676 "claimed": true, 00:10:00.676 "claim_type": "exclusive_write", 00:10:00.676 "zoned": false, 00:10:00.676 "supported_io_types": { 00:10:00.676 "read": true, 00:10:00.676 "write": true, 00:10:00.676 "unmap": true, 00:10:00.676 "flush": true, 00:10:00.676 "reset": true, 00:10:00.676 "nvme_admin": false, 00:10:00.676 "nvme_io": false, 00:10:00.676 "nvme_io_md": false, 00:10:00.676 "write_zeroes": true, 00:10:00.676 "zcopy": true, 00:10:00.676 "get_zone_info": false, 00:10:00.676 "zone_management": false, 00:10:00.676 "zone_append": false, 00:10:00.676 "compare": false, 00:10:00.676 "compare_and_write": false, 00:10:00.676 "abort": true, 00:10:00.676 "seek_hole": false, 00:10:00.676 "seek_data": false, 00:10:00.676 "copy": true, 00:10:00.676 "nvme_iov_md": false 00:10:00.676 }, 00:10:00.676 "memory_domains": [ 00:10:00.676 { 00:10:00.676 "dma_device_id": "system", 00:10:00.676 "dma_device_type": 1 00:10:00.676 }, 00:10:00.676 { 00:10:00.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.676 "dma_device_type": 2 00:10:00.676 } 00:10:00.676 ], 00:10:00.676 "driver_specific": {} 00:10:00.676 } 00:10:00.676 ] 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:00.676 
12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.676 "name": "Existed_Raid", 00:10:00.676 "uuid": "ff8c8392-7606-4c85-af1c-ba68c050d3ba", 00:10:00.676 "strip_size_kb": 64, 00:10:00.676 "state": "online", 00:10:00.676 "raid_level": "raid0", 00:10:00.676 "superblock": false, 00:10:00.676 "num_base_bdevs": 4, 00:10:00.676 "num_base_bdevs_discovered": 4, 00:10:00.676 
"num_base_bdevs_operational": 4, 00:10:00.676 "base_bdevs_list": [ 00:10:00.676 { 00:10:00.676 "name": "NewBaseBdev", 00:10:00.676 "uuid": "f725f4a3-a2e8-4812-b65e-182dd7346e53", 00:10:00.676 "is_configured": true, 00:10:00.676 "data_offset": 0, 00:10:00.676 "data_size": 65536 00:10:00.676 }, 00:10:00.676 { 00:10:00.676 "name": "BaseBdev2", 00:10:00.676 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:10:00.676 "is_configured": true, 00:10:00.676 "data_offset": 0, 00:10:00.676 "data_size": 65536 00:10:00.676 }, 00:10:00.676 { 00:10:00.676 "name": "BaseBdev3", 00:10:00.676 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:10:00.676 "is_configured": true, 00:10:00.676 "data_offset": 0, 00:10:00.676 "data_size": 65536 00:10:00.676 }, 00:10:00.676 { 00:10:00.676 "name": "BaseBdev4", 00:10:00.676 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:10:00.676 "is_configured": true, 00:10:00.676 "data_offset": 0, 00:10:00.676 "data_size": 65536 00:10:00.676 } 00:10:00.676 ] 00:10:00.676 }' 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.676 12:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.246 [2024-11-19 12:30:06.212380] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.246 "name": "Existed_Raid", 00:10:01.246 "aliases": [ 00:10:01.246 "ff8c8392-7606-4c85-af1c-ba68c050d3ba" 00:10:01.246 ], 00:10:01.246 "product_name": "Raid Volume", 00:10:01.246 "block_size": 512, 00:10:01.246 "num_blocks": 262144, 00:10:01.246 "uuid": "ff8c8392-7606-4c85-af1c-ba68c050d3ba", 00:10:01.246 "assigned_rate_limits": { 00:10:01.246 "rw_ios_per_sec": 0, 00:10:01.246 "rw_mbytes_per_sec": 0, 00:10:01.246 "r_mbytes_per_sec": 0, 00:10:01.246 "w_mbytes_per_sec": 0 00:10:01.246 }, 00:10:01.246 "claimed": false, 00:10:01.246 "zoned": false, 00:10:01.246 "supported_io_types": { 00:10:01.246 "read": true, 00:10:01.246 "write": true, 00:10:01.246 "unmap": true, 00:10:01.246 "flush": true, 00:10:01.246 "reset": true, 00:10:01.246 "nvme_admin": false, 00:10:01.246 "nvme_io": false, 00:10:01.246 "nvme_io_md": false, 00:10:01.246 "write_zeroes": true, 00:10:01.246 "zcopy": false, 00:10:01.246 "get_zone_info": false, 00:10:01.246 "zone_management": false, 00:10:01.246 "zone_append": false, 00:10:01.246 "compare": false, 00:10:01.246 "compare_and_write": false, 00:10:01.246 "abort": false, 00:10:01.246 "seek_hole": false, 00:10:01.246 "seek_data": false, 00:10:01.246 "copy": false, 00:10:01.246 "nvme_iov_md": false 00:10:01.246 }, 00:10:01.246 "memory_domains": [ 00:10:01.246 { 00:10:01.246 "dma_device_id": "system", 
00:10:01.246 "dma_device_type": 1 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.246 "dma_device_type": 2 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "dma_device_id": "system", 00:10:01.246 "dma_device_type": 1 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.246 "dma_device_type": 2 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "dma_device_id": "system", 00:10:01.246 "dma_device_type": 1 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.246 "dma_device_type": 2 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "dma_device_id": "system", 00:10:01.246 "dma_device_type": 1 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.246 "dma_device_type": 2 00:10:01.246 } 00:10:01.246 ], 00:10:01.246 "driver_specific": { 00:10:01.246 "raid": { 00:10:01.246 "uuid": "ff8c8392-7606-4c85-af1c-ba68c050d3ba", 00:10:01.246 "strip_size_kb": 64, 00:10:01.246 "state": "online", 00:10:01.246 "raid_level": "raid0", 00:10:01.246 "superblock": false, 00:10:01.246 "num_base_bdevs": 4, 00:10:01.246 "num_base_bdevs_discovered": 4, 00:10:01.246 "num_base_bdevs_operational": 4, 00:10:01.246 "base_bdevs_list": [ 00:10:01.246 { 00:10:01.246 "name": "NewBaseBdev", 00:10:01.246 "uuid": "f725f4a3-a2e8-4812-b65e-182dd7346e53", 00:10:01.246 "is_configured": true, 00:10:01.246 "data_offset": 0, 00:10:01.246 "data_size": 65536 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "name": "BaseBdev2", 00:10:01.246 "uuid": "21eef390-0a9f-433d-83ed-9c70f979971d", 00:10:01.246 "is_configured": true, 00:10:01.246 "data_offset": 0, 00:10:01.246 "data_size": 65536 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "name": "BaseBdev3", 00:10:01.246 "uuid": "307bebfc-da08-4594-8636-09a65a16248e", 00:10:01.246 "is_configured": true, 00:10:01.246 "data_offset": 0, 00:10:01.246 "data_size": 65536 00:10:01.246 }, 00:10:01.246 { 00:10:01.246 "name": "BaseBdev4", 
00:10:01.246 "uuid": "642fe61d-3cfc-406e-9ef8-e81ab789034d", 00:10:01.246 "is_configured": true, 00:10:01.246 "data_offset": 0, 00:10:01.246 "data_size": 65536 00:10:01.246 } 00:10:01.246 ] 00:10:01.246 } 00:10:01.246 } 00:10:01.246 }' 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:01.246 BaseBdev2 00:10:01.246 BaseBdev3 00:10:01.246 BaseBdev4' 00:10:01.246 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.247 [2024-11-19 12:30:06.491552] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.247 [2024-11-19 12:30:06.491594] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.247 [2024-11-19 12:30:06.491680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.247 [2024-11-19 12:30:06.491766] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.247 [2024-11-19 12:30:06.491795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80533 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80533 
']' 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80533 00:10:01.247 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:01.507 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.507 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80533 00:10:01.507 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.507 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.507 killing process with pid 80533 00:10:01.507 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80533' 00:10:01.507 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80533 00:10:01.507 [2024-11-19 12:30:06.543118] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.507 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80533 00:10:01.507 [2024-11-19 12:30:06.585366] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.767 00:10:01.767 real 0m9.615s 00:10:01.767 user 0m16.396s 00:10:01.767 sys 0m2.048s 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.767 ************************************ 00:10:01.767 END TEST raid_state_function_test 00:10:01.767 ************************************ 00:10:01.767 12:30:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:01.767 
12:30:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:01.767 12:30:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.767 12:30:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.767 ************************************ 00:10:01.767 START TEST raid_state_function_test_sb 00:10:01.767 ************************************ 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:01.767 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:01.768 Process raid pid: 81182 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81182 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81182' 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81182 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81182 ']' 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.768 12:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.768 [2024-11-19 12:30:07.014716] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:01.768 [2024-11-19 12:30:07.014884] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.028 [2024-11-19 12:30:07.183791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.028 [2024-11-19 12:30:07.233622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.028 [2024-11-19 12:30:07.275602] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.028 [2024-11-19 12:30:07.275640] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.598 12:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.598 12:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:02.598 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.598 12:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.598 12:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.857 [2024-11-19 12:30:07.860907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.857 [2024-11-19 12:30:07.860960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.857 [2024-11-19 12:30:07.860972] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.857 [2024-11-19 12:30:07.860981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.857 [2024-11-19 12:30:07.860986] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:02.857 [2024-11-19 12:30:07.860998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.857 [2024-11-19 12:30:07.861005] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.857 [2024-11-19 12:30:07.861013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.857 12:30:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.857 "name": "Existed_Raid", 00:10:02.857 "uuid": "73c8939d-e901-4223-b712-a69ff02f86fe", 00:10:02.857 "strip_size_kb": 64, 00:10:02.857 "state": "configuring", 00:10:02.857 "raid_level": "raid0", 00:10:02.857 "superblock": true, 00:10:02.857 "num_base_bdevs": 4, 00:10:02.857 "num_base_bdevs_discovered": 0, 00:10:02.857 "num_base_bdevs_operational": 4, 00:10:02.857 "base_bdevs_list": [ 00:10:02.857 { 00:10:02.857 "name": "BaseBdev1", 00:10:02.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.857 "is_configured": false, 00:10:02.857 "data_offset": 0, 00:10:02.857 "data_size": 0 00:10:02.857 }, 00:10:02.857 { 00:10:02.857 "name": "BaseBdev2", 00:10:02.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.857 "is_configured": false, 00:10:02.857 "data_offset": 0, 00:10:02.857 "data_size": 0 00:10:02.857 }, 00:10:02.857 { 00:10:02.857 "name": "BaseBdev3", 00:10:02.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.857 "is_configured": false, 00:10:02.857 "data_offset": 0, 00:10:02.857 "data_size": 0 00:10:02.857 }, 00:10:02.857 { 00:10:02.857 "name": "BaseBdev4", 00:10:02.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.857 "is_configured": false, 00:10:02.857 "data_offset": 0, 00:10:02.857 "data_size": 0 00:10:02.857 } 00:10:02.857 ] 00:10:02.857 }' 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.857 12:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.117 12:30:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.117 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.117 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.117 [2024-11-19 12:30:08.355962] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.117 [2024-11-19 12:30:08.356017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:03.117 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.118 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.118 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.118 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.118 [2024-11-19 12:30:08.367969] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.118 [2024-11-19 12:30:08.368012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.118 [2024-11-19 12:30:08.368020] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.118 [2024-11-19 12:30:08.368030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.118 [2024-11-19 12:30:08.368036] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.118 [2024-11-19 12:30:08.368044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.118 [2024-11-19 12:30:08.368050] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:03.118 [2024-11-19 12:30:08.368059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.118 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.118 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.118 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.118 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.378 [2024-11-19 12:30:08.388786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.378 BaseBdev1 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.378 [ 00:10:03.378 { 00:10:03.378 "name": "BaseBdev1", 00:10:03.378 "aliases": [ 00:10:03.378 "71aa98db-f85c-4938-95fa-35154cc69d4d" 00:10:03.378 ], 00:10:03.378 "product_name": "Malloc disk", 00:10:03.378 "block_size": 512, 00:10:03.378 "num_blocks": 65536, 00:10:03.378 "uuid": "71aa98db-f85c-4938-95fa-35154cc69d4d", 00:10:03.378 "assigned_rate_limits": { 00:10:03.378 "rw_ios_per_sec": 0, 00:10:03.378 "rw_mbytes_per_sec": 0, 00:10:03.378 "r_mbytes_per_sec": 0, 00:10:03.378 "w_mbytes_per_sec": 0 00:10:03.378 }, 00:10:03.378 "claimed": true, 00:10:03.378 "claim_type": "exclusive_write", 00:10:03.378 "zoned": false, 00:10:03.378 "supported_io_types": { 00:10:03.378 "read": true, 00:10:03.378 "write": true, 00:10:03.378 "unmap": true, 00:10:03.378 "flush": true, 00:10:03.378 "reset": true, 00:10:03.378 "nvme_admin": false, 00:10:03.378 "nvme_io": false, 00:10:03.378 "nvme_io_md": false, 00:10:03.378 "write_zeroes": true, 00:10:03.378 "zcopy": true, 00:10:03.378 "get_zone_info": false, 00:10:03.378 "zone_management": false, 00:10:03.378 "zone_append": false, 00:10:03.378 "compare": false, 00:10:03.378 "compare_and_write": false, 00:10:03.378 "abort": true, 00:10:03.378 "seek_hole": false, 00:10:03.378 "seek_data": false, 00:10:03.378 "copy": true, 00:10:03.378 "nvme_iov_md": false 00:10:03.378 }, 00:10:03.378 "memory_domains": [ 00:10:03.378 { 00:10:03.378 "dma_device_id": "system", 00:10:03.378 "dma_device_type": 1 00:10:03.378 }, 00:10:03.378 { 00:10:03.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.378 "dma_device_type": 2 00:10:03.378 } 00:10:03.378 ], 00:10:03.378 "driver_specific": {} 
00:10:03.378 } 00:10:03.378 ] 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.378 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.379 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.379 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.379 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.379 12:30:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.379 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.379 "name": "Existed_Raid", 00:10:03.379 "uuid": "40a786b3-4a7c-48e8-acc8-1ad76a43195f", 00:10:03.379 "strip_size_kb": 64, 00:10:03.379 "state": "configuring", 00:10:03.379 "raid_level": "raid0", 00:10:03.379 "superblock": true, 00:10:03.379 "num_base_bdevs": 4, 00:10:03.379 "num_base_bdevs_discovered": 1, 00:10:03.379 "num_base_bdevs_operational": 4, 00:10:03.379 "base_bdevs_list": [ 00:10:03.379 { 00:10:03.379 "name": "BaseBdev1", 00:10:03.379 "uuid": "71aa98db-f85c-4938-95fa-35154cc69d4d", 00:10:03.379 "is_configured": true, 00:10:03.379 "data_offset": 2048, 00:10:03.379 "data_size": 63488 00:10:03.379 }, 00:10:03.379 { 00:10:03.379 "name": "BaseBdev2", 00:10:03.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.379 "is_configured": false, 00:10:03.379 "data_offset": 0, 00:10:03.379 "data_size": 0 00:10:03.379 }, 00:10:03.379 { 00:10:03.379 "name": "BaseBdev3", 00:10:03.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.379 "is_configured": false, 00:10:03.379 "data_offset": 0, 00:10:03.379 "data_size": 0 00:10:03.379 }, 00:10:03.379 { 00:10:03.379 "name": "BaseBdev4", 00:10:03.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.379 "is_configured": false, 00:10:03.379 "data_offset": 0, 00:10:03.379 "data_size": 0 00:10:03.379 } 00:10:03.379 ] 00:10:03.379 }' 00:10:03.379 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.379 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.639 [2024-11-19 12:30:08.760216] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.639 [2024-11-19 12:30:08.760279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.639 [2024-11-19 12:30:08.768233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.639 [2024-11-19 12:30:08.770094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.639 [2024-11-19 12:30:08.770132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.639 [2024-11-19 12:30:08.770141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.639 [2024-11-19 12:30:08.770150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.639 [2024-11-19 12:30:08.770157] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.639 [2024-11-19 12:30:08.770164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.639 12:30:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.639 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.640 "name": 
"Existed_Raid", 00:10:03.640 "uuid": "f86d9fb0-51c2-4ad6-ba38-28de381a27c2", 00:10:03.640 "strip_size_kb": 64, 00:10:03.640 "state": "configuring", 00:10:03.640 "raid_level": "raid0", 00:10:03.640 "superblock": true, 00:10:03.640 "num_base_bdevs": 4, 00:10:03.640 "num_base_bdevs_discovered": 1, 00:10:03.640 "num_base_bdevs_operational": 4, 00:10:03.640 "base_bdevs_list": [ 00:10:03.640 { 00:10:03.640 "name": "BaseBdev1", 00:10:03.640 "uuid": "71aa98db-f85c-4938-95fa-35154cc69d4d", 00:10:03.640 "is_configured": true, 00:10:03.640 "data_offset": 2048, 00:10:03.640 "data_size": 63488 00:10:03.640 }, 00:10:03.640 { 00:10:03.640 "name": "BaseBdev2", 00:10:03.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.640 "is_configured": false, 00:10:03.640 "data_offset": 0, 00:10:03.640 "data_size": 0 00:10:03.640 }, 00:10:03.640 { 00:10:03.640 "name": "BaseBdev3", 00:10:03.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.640 "is_configured": false, 00:10:03.640 "data_offset": 0, 00:10:03.640 "data_size": 0 00:10:03.640 }, 00:10:03.640 { 00:10:03.640 "name": "BaseBdev4", 00:10:03.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.640 "is_configured": false, 00:10:03.640 "data_offset": 0, 00:10:03.640 "data_size": 0 00:10:03.640 } 00:10:03.640 ] 00:10:03.640 }' 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.640 12:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 [2024-11-19 12:30:09.267260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:04.208 BaseBdev2 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 [ 00:10:04.208 { 00:10:04.208 "name": "BaseBdev2", 00:10:04.208 "aliases": [ 00:10:04.208 "cfa9cb03-4ecb-40d8-8fd5-ddf674ed60d8" 00:10:04.208 ], 00:10:04.208 "product_name": "Malloc disk", 00:10:04.208 "block_size": 512, 00:10:04.208 "num_blocks": 65536, 00:10:04.208 "uuid": "cfa9cb03-4ecb-40d8-8fd5-ddf674ed60d8", 00:10:04.208 
"assigned_rate_limits": { 00:10:04.208 "rw_ios_per_sec": 0, 00:10:04.208 "rw_mbytes_per_sec": 0, 00:10:04.208 "r_mbytes_per_sec": 0, 00:10:04.208 "w_mbytes_per_sec": 0 00:10:04.208 }, 00:10:04.208 "claimed": true, 00:10:04.208 "claim_type": "exclusive_write", 00:10:04.208 "zoned": false, 00:10:04.208 "supported_io_types": { 00:10:04.208 "read": true, 00:10:04.208 "write": true, 00:10:04.208 "unmap": true, 00:10:04.208 "flush": true, 00:10:04.208 "reset": true, 00:10:04.208 "nvme_admin": false, 00:10:04.208 "nvme_io": false, 00:10:04.208 "nvme_io_md": false, 00:10:04.208 "write_zeroes": true, 00:10:04.208 "zcopy": true, 00:10:04.208 "get_zone_info": false, 00:10:04.208 "zone_management": false, 00:10:04.208 "zone_append": false, 00:10:04.208 "compare": false, 00:10:04.208 "compare_and_write": false, 00:10:04.208 "abort": true, 00:10:04.208 "seek_hole": false, 00:10:04.208 "seek_data": false, 00:10:04.208 "copy": true, 00:10:04.208 "nvme_iov_md": false 00:10:04.208 }, 00:10:04.208 "memory_domains": [ 00:10:04.208 { 00:10:04.208 "dma_device_id": "system", 00:10:04.208 "dma_device_type": 1 00:10:04.208 }, 00:10:04.208 { 00:10:04.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.208 "dma_device_type": 2 00:10:04.208 } 00:10:04.208 ], 00:10:04.208 "driver_specific": {} 00:10:04.208 } 00:10:04.208 ] 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.208 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.209 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.209 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.209 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.209 "name": "Existed_Raid", 00:10:04.209 "uuid": "f86d9fb0-51c2-4ad6-ba38-28de381a27c2", 00:10:04.209 "strip_size_kb": 64, 00:10:04.209 "state": "configuring", 00:10:04.209 "raid_level": "raid0", 00:10:04.209 "superblock": true, 00:10:04.209 "num_base_bdevs": 4, 00:10:04.209 "num_base_bdevs_discovered": 2, 00:10:04.209 "num_base_bdevs_operational": 4, 
00:10:04.209 "base_bdevs_list": [ 00:10:04.209 { 00:10:04.209 "name": "BaseBdev1", 00:10:04.209 "uuid": "71aa98db-f85c-4938-95fa-35154cc69d4d", 00:10:04.209 "is_configured": true, 00:10:04.209 "data_offset": 2048, 00:10:04.209 "data_size": 63488 00:10:04.209 }, 00:10:04.209 { 00:10:04.209 "name": "BaseBdev2", 00:10:04.209 "uuid": "cfa9cb03-4ecb-40d8-8fd5-ddf674ed60d8", 00:10:04.209 "is_configured": true, 00:10:04.209 "data_offset": 2048, 00:10:04.209 "data_size": 63488 00:10:04.209 }, 00:10:04.209 { 00:10:04.209 "name": "BaseBdev3", 00:10:04.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.209 "is_configured": false, 00:10:04.209 "data_offset": 0, 00:10:04.209 "data_size": 0 00:10:04.209 }, 00:10:04.209 { 00:10:04.209 "name": "BaseBdev4", 00:10:04.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.209 "is_configured": false, 00:10:04.209 "data_offset": 0, 00:10:04.209 "data_size": 0 00:10:04.209 } 00:10:04.209 ] 00:10:04.209 }' 00:10:04.209 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.209 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.468 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.468 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.468 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.728 [2024-11-19 12:30:09.733694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.728 BaseBdev3 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.728 [ 00:10:04.728 { 00:10:04.728 "name": "BaseBdev3", 00:10:04.728 "aliases": [ 00:10:04.728 "31197ab5-057f-4fb8-a515-a4d4fea1fcc9" 00:10:04.728 ], 00:10:04.728 "product_name": "Malloc disk", 00:10:04.728 "block_size": 512, 00:10:04.728 "num_blocks": 65536, 00:10:04.728 "uuid": "31197ab5-057f-4fb8-a515-a4d4fea1fcc9", 00:10:04.728 "assigned_rate_limits": { 00:10:04.728 "rw_ios_per_sec": 0, 00:10:04.728 "rw_mbytes_per_sec": 0, 00:10:04.728 "r_mbytes_per_sec": 0, 00:10:04.728 "w_mbytes_per_sec": 0 00:10:04.728 }, 00:10:04.728 "claimed": true, 00:10:04.728 "claim_type": "exclusive_write", 00:10:04.728 "zoned": false, 00:10:04.728 "supported_io_types": { 00:10:04.728 "read": true, 00:10:04.728 
"write": true, 00:10:04.728 "unmap": true, 00:10:04.728 "flush": true, 00:10:04.728 "reset": true, 00:10:04.728 "nvme_admin": false, 00:10:04.728 "nvme_io": false, 00:10:04.728 "nvme_io_md": false, 00:10:04.728 "write_zeroes": true, 00:10:04.728 "zcopy": true, 00:10:04.728 "get_zone_info": false, 00:10:04.728 "zone_management": false, 00:10:04.728 "zone_append": false, 00:10:04.728 "compare": false, 00:10:04.728 "compare_and_write": false, 00:10:04.728 "abort": true, 00:10:04.728 "seek_hole": false, 00:10:04.728 "seek_data": false, 00:10:04.728 "copy": true, 00:10:04.728 "nvme_iov_md": false 00:10:04.728 }, 00:10:04.728 "memory_domains": [ 00:10:04.728 { 00:10:04.728 "dma_device_id": "system", 00:10:04.728 "dma_device_type": 1 00:10:04.728 }, 00:10:04.728 { 00:10:04.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.728 "dma_device_type": 2 00:10:04.728 } 00:10:04.728 ], 00:10:04.728 "driver_specific": {} 00:10:04.728 } 00:10:04.728 ] 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.728 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.728 "name": "Existed_Raid", 00:10:04.728 "uuid": "f86d9fb0-51c2-4ad6-ba38-28de381a27c2", 00:10:04.728 "strip_size_kb": 64, 00:10:04.728 "state": "configuring", 00:10:04.728 "raid_level": "raid0", 00:10:04.728 "superblock": true, 00:10:04.728 "num_base_bdevs": 4, 00:10:04.728 "num_base_bdevs_discovered": 3, 00:10:04.728 "num_base_bdevs_operational": 4, 00:10:04.728 "base_bdevs_list": [ 00:10:04.729 { 00:10:04.729 "name": "BaseBdev1", 00:10:04.729 "uuid": "71aa98db-f85c-4938-95fa-35154cc69d4d", 00:10:04.729 "is_configured": true, 00:10:04.729 "data_offset": 2048, 00:10:04.729 "data_size": 63488 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "name": "BaseBdev2", 00:10:04.729 "uuid": 
"cfa9cb03-4ecb-40d8-8fd5-ddf674ed60d8", 00:10:04.729 "is_configured": true, 00:10:04.729 "data_offset": 2048, 00:10:04.729 "data_size": 63488 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "name": "BaseBdev3", 00:10:04.729 "uuid": "31197ab5-057f-4fb8-a515-a4d4fea1fcc9", 00:10:04.729 "is_configured": true, 00:10:04.729 "data_offset": 2048, 00:10:04.729 "data_size": 63488 00:10:04.729 }, 00:10:04.729 { 00:10:04.729 "name": "BaseBdev4", 00:10:04.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.729 "is_configured": false, 00:10:04.729 "data_offset": 0, 00:10:04.729 "data_size": 0 00:10:04.729 } 00:10:04.729 ] 00:10:04.729 }' 00:10:04.729 12:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.729 12:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.989 [2024-11-19 12:30:10.243685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:04.989 [2024-11-19 12:30:10.243905] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:04.989 [2024-11-19 12:30:10.243922] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:04.989 BaseBdev4 00:10:04.989 [2024-11-19 12:30:10.244190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:04.989 [2024-11-19 12:30:10.244325] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:04.989 [2024-11-19 12:30:10.244344] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:04.989 [2024-11-19 12:30:10.244477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.989 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.248 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.249 [ 00:10:05.249 { 00:10:05.249 "name": "BaseBdev4", 00:10:05.249 "aliases": [ 00:10:05.249 "131c6ea8-f21f-4b2f-908a-718640c93df4" 00:10:05.249 ], 00:10:05.249 "product_name": "Malloc disk", 00:10:05.249 "block_size": 512, 00:10:05.249 
"num_blocks": 65536, 00:10:05.249 "uuid": "131c6ea8-f21f-4b2f-908a-718640c93df4", 00:10:05.249 "assigned_rate_limits": { 00:10:05.249 "rw_ios_per_sec": 0, 00:10:05.249 "rw_mbytes_per_sec": 0, 00:10:05.249 "r_mbytes_per_sec": 0, 00:10:05.249 "w_mbytes_per_sec": 0 00:10:05.249 }, 00:10:05.249 "claimed": true, 00:10:05.249 "claim_type": "exclusive_write", 00:10:05.249 "zoned": false, 00:10:05.249 "supported_io_types": { 00:10:05.249 "read": true, 00:10:05.249 "write": true, 00:10:05.249 "unmap": true, 00:10:05.249 "flush": true, 00:10:05.249 "reset": true, 00:10:05.249 "nvme_admin": false, 00:10:05.249 "nvme_io": false, 00:10:05.249 "nvme_io_md": false, 00:10:05.249 "write_zeroes": true, 00:10:05.249 "zcopy": true, 00:10:05.249 "get_zone_info": false, 00:10:05.249 "zone_management": false, 00:10:05.249 "zone_append": false, 00:10:05.249 "compare": false, 00:10:05.249 "compare_and_write": false, 00:10:05.249 "abort": true, 00:10:05.249 "seek_hole": false, 00:10:05.249 "seek_data": false, 00:10:05.249 "copy": true, 00:10:05.249 "nvme_iov_md": false 00:10:05.249 }, 00:10:05.249 "memory_domains": [ 00:10:05.249 { 00:10:05.249 "dma_device_id": "system", 00:10:05.249 "dma_device_type": 1 00:10:05.249 }, 00:10:05.249 { 00:10:05.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.249 "dma_device_type": 2 00:10:05.249 } 00:10:05.249 ], 00:10:05.249 "driver_specific": {} 00:10:05.249 } 00:10:05.249 ] 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.249 "name": "Existed_Raid", 00:10:05.249 "uuid": "f86d9fb0-51c2-4ad6-ba38-28de381a27c2", 00:10:05.249 "strip_size_kb": 64, 00:10:05.249 "state": "online", 00:10:05.249 "raid_level": "raid0", 00:10:05.249 "superblock": true, 00:10:05.249 "num_base_bdevs": 4, 
00:10:05.249 "num_base_bdevs_discovered": 4, 00:10:05.249 "num_base_bdevs_operational": 4, 00:10:05.249 "base_bdevs_list": [ 00:10:05.249 { 00:10:05.249 "name": "BaseBdev1", 00:10:05.249 "uuid": "71aa98db-f85c-4938-95fa-35154cc69d4d", 00:10:05.249 "is_configured": true, 00:10:05.249 "data_offset": 2048, 00:10:05.249 "data_size": 63488 00:10:05.249 }, 00:10:05.249 { 00:10:05.249 "name": "BaseBdev2", 00:10:05.249 "uuid": "cfa9cb03-4ecb-40d8-8fd5-ddf674ed60d8", 00:10:05.249 "is_configured": true, 00:10:05.249 "data_offset": 2048, 00:10:05.249 "data_size": 63488 00:10:05.249 }, 00:10:05.249 { 00:10:05.249 "name": "BaseBdev3", 00:10:05.249 "uuid": "31197ab5-057f-4fb8-a515-a4d4fea1fcc9", 00:10:05.249 "is_configured": true, 00:10:05.249 "data_offset": 2048, 00:10:05.249 "data_size": 63488 00:10:05.249 }, 00:10:05.249 { 00:10:05.249 "name": "BaseBdev4", 00:10:05.249 "uuid": "131c6ea8-f21f-4b2f-908a-718640c93df4", 00:10:05.249 "is_configured": true, 00:10:05.249 "data_offset": 2048, 00:10:05.249 "data_size": 63488 00:10:05.249 } 00:10:05.249 ] 00:10:05.249 }' 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.249 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.509 
12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.509 [2024-11-19 12:30:10.739300] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.509 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.769 "name": "Existed_Raid", 00:10:05.769 "aliases": [ 00:10:05.769 "f86d9fb0-51c2-4ad6-ba38-28de381a27c2" 00:10:05.769 ], 00:10:05.769 "product_name": "Raid Volume", 00:10:05.769 "block_size": 512, 00:10:05.769 "num_blocks": 253952, 00:10:05.769 "uuid": "f86d9fb0-51c2-4ad6-ba38-28de381a27c2", 00:10:05.769 "assigned_rate_limits": { 00:10:05.769 "rw_ios_per_sec": 0, 00:10:05.769 "rw_mbytes_per_sec": 0, 00:10:05.769 "r_mbytes_per_sec": 0, 00:10:05.769 "w_mbytes_per_sec": 0 00:10:05.769 }, 00:10:05.769 "claimed": false, 00:10:05.769 "zoned": false, 00:10:05.769 "supported_io_types": { 00:10:05.769 "read": true, 00:10:05.769 "write": true, 00:10:05.769 "unmap": true, 00:10:05.769 "flush": true, 00:10:05.769 "reset": true, 00:10:05.769 "nvme_admin": false, 00:10:05.769 "nvme_io": false, 00:10:05.769 "nvme_io_md": false, 00:10:05.769 "write_zeroes": true, 00:10:05.769 "zcopy": false, 00:10:05.769 "get_zone_info": false, 00:10:05.769 "zone_management": false, 00:10:05.769 "zone_append": false, 00:10:05.769 "compare": false, 00:10:05.769 "compare_and_write": false, 00:10:05.769 "abort": false, 00:10:05.769 "seek_hole": false, 00:10:05.769 "seek_data": false, 00:10:05.769 "copy": false, 00:10:05.769 
"nvme_iov_md": false 00:10:05.769 }, 00:10:05.769 "memory_domains": [ 00:10:05.769 { 00:10:05.769 "dma_device_id": "system", 00:10:05.769 "dma_device_type": 1 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.769 "dma_device_type": 2 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "dma_device_id": "system", 00:10:05.769 "dma_device_type": 1 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.769 "dma_device_type": 2 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "dma_device_id": "system", 00:10:05.769 "dma_device_type": 1 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.769 "dma_device_type": 2 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "dma_device_id": "system", 00:10:05.769 "dma_device_type": 1 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.769 "dma_device_type": 2 00:10:05.769 } 00:10:05.769 ], 00:10:05.769 "driver_specific": { 00:10:05.769 "raid": { 00:10:05.769 "uuid": "f86d9fb0-51c2-4ad6-ba38-28de381a27c2", 00:10:05.769 "strip_size_kb": 64, 00:10:05.769 "state": "online", 00:10:05.769 "raid_level": "raid0", 00:10:05.769 "superblock": true, 00:10:05.769 "num_base_bdevs": 4, 00:10:05.769 "num_base_bdevs_discovered": 4, 00:10:05.769 "num_base_bdevs_operational": 4, 00:10:05.769 "base_bdevs_list": [ 00:10:05.769 { 00:10:05.769 "name": "BaseBdev1", 00:10:05.769 "uuid": "71aa98db-f85c-4938-95fa-35154cc69d4d", 00:10:05.769 "is_configured": true, 00:10:05.769 "data_offset": 2048, 00:10:05.769 "data_size": 63488 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "name": "BaseBdev2", 00:10:05.769 "uuid": "cfa9cb03-4ecb-40d8-8fd5-ddf674ed60d8", 00:10:05.769 "is_configured": true, 00:10:05.769 "data_offset": 2048, 00:10:05.769 "data_size": 63488 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "name": "BaseBdev3", 00:10:05.769 "uuid": "31197ab5-057f-4fb8-a515-a4d4fea1fcc9", 00:10:05.769 "is_configured": true, 
00:10:05.769 "data_offset": 2048, 00:10:05.769 "data_size": 63488 00:10:05.769 }, 00:10:05.769 { 00:10:05.769 "name": "BaseBdev4", 00:10:05.769 "uuid": "131c6ea8-f21f-4b2f-908a-718640c93df4", 00:10:05.769 "is_configured": true, 00:10:05.769 "data_offset": 2048, 00:10:05.769 "data_size": 63488 00:10:05.769 } 00:10:05.769 ] 00:10:05.769 } 00:10:05.769 } 00:10:05.769 }' 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:05.769 BaseBdev2 00:10:05.769 BaseBdev3 00:10:05.769 BaseBdev4' 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.769 12:30:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.769 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.770 12:30:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.770 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.770 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.770 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:05.770 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:05.770 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.770 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.770 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.030 [2024-11-19 12:30:11.074878] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.030 [2024-11-19 12:30:11.074919] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.030 [2024-11-19 12:30:11.075004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.030 "name": "Existed_Raid", 00:10:06.030 "uuid": "f86d9fb0-51c2-4ad6-ba38-28de381a27c2", 00:10:06.030 "strip_size_kb": 64, 00:10:06.030 "state": "offline", 00:10:06.030 "raid_level": "raid0", 00:10:06.030 "superblock": true, 00:10:06.030 "num_base_bdevs": 4, 00:10:06.030 "num_base_bdevs_discovered": 3, 00:10:06.030 "num_base_bdevs_operational": 3, 00:10:06.030 "base_bdevs_list": [ 00:10:06.030 { 00:10:06.030 "name": null, 00:10:06.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.030 "is_configured": false, 00:10:06.030 "data_offset": 0, 00:10:06.030 "data_size": 63488 00:10:06.030 }, 00:10:06.030 { 00:10:06.030 "name": "BaseBdev2", 00:10:06.030 "uuid": "cfa9cb03-4ecb-40d8-8fd5-ddf674ed60d8", 00:10:06.030 "is_configured": true, 00:10:06.030 "data_offset": 2048, 00:10:06.030 "data_size": 63488 00:10:06.030 }, 00:10:06.030 { 00:10:06.030 "name": "BaseBdev3", 00:10:06.030 "uuid": "31197ab5-057f-4fb8-a515-a4d4fea1fcc9", 00:10:06.030 "is_configured": true, 00:10:06.030 "data_offset": 2048, 00:10:06.030 "data_size": 63488 00:10:06.030 }, 00:10:06.030 { 00:10:06.030 "name": "BaseBdev4", 00:10:06.030 "uuid": "131c6ea8-f21f-4b2f-908a-718640c93df4", 00:10:06.030 "is_configured": true, 00:10:06.030 "data_offset": 2048, 00:10:06.030 "data_size": 63488 00:10:06.030 } 00:10:06.030 ] 00:10:06.030 }' 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.030 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.290 
12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.290 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.290 [2024-11-19 12:30:11.541945] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.549 [2024-11-19 12:30:11.609647] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.549 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:06.550 12:30:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.550 [2024-11-19 12:30:11.681255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:06.550 [2024-11-19 12:30:11.681315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.550 BaseBdev2 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.550 [ 00:10:06.550 { 00:10:06.550 "name": "BaseBdev2", 00:10:06.550 "aliases": [ 00:10:06.550 
"9f891e8b-3e26-4e53-a1df-523cbf2c0e47" 00:10:06.550 ], 00:10:06.550 "product_name": "Malloc disk", 00:10:06.550 "block_size": 512, 00:10:06.550 "num_blocks": 65536, 00:10:06.550 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:06.550 "assigned_rate_limits": { 00:10:06.550 "rw_ios_per_sec": 0, 00:10:06.550 "rw_mbytes_per_sec": 0, 00:10:06.550 "r_mbytes_per_sec": 0, 00:10:06.550 "w_mbytes_per_sec": 0 00:10:06.550 }, 00:10:06.550 "claimed": false, 00:10:06.550 "zoned": false, 00:10:06.550 "supported_io_types": { 00:10:06.550 "read": true, 00:10:06.550 "write": true, 00:10:06.550 "unmap": true, 00:10:06.550 "flush": true, 00:10:06.550 "reset": true, 00:10:06.550 "nvme_admin": false, 00:10:06.550 "nvme_io": false, 00:10:06.550 "nvme_io_md": false, 00:10:06.550 "write_zeroes": true, 00:10:06.550 "zcopy": true, 00:10:06.550 "get_zone_info": false, 00:10:06.550 "zone_management": false, 00:10:06.550 "zone_append": false, 00:10:06.550 "compare": false, 00:10:06.550 "compare_and_write": false, 00:10:06.550 "abort": true, 00:10:06.550 "seek_hole": false, 00:10:06.550 "seek_data": false, 00:10:06.550 "copy": true, 00:10:06.550 "nvme_iov_md": false 00:10:06.550 }, 00:10:06.550 "memory_domains": [ 00:10:06.550 { 00:10:06.550 "dma_device_id": "system", 00:10:06.550 "dma_device_type": 1 00:10:06.550 }, 00:10:06.550 { 00:10:06.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.550 "dma_device_type": 2 00:10:06.550 } 00:10:06.550 ], 00:10:06.550 "driver_specific": {} 00:10:06.550 } 00:10:06.550 ] 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.550 12:30:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.550 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.810 BaseBdev3 00:10:06.810 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.810 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.810 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:06.810 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.810 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.810 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.811 [ 00:10:06.811 { 
00:10:06.811 "name": "BaseBdev3", 00:10:06.811 "aliases": [ 00:10:06.811 "d28a933f-6c94-4989-8b8f-fd20263b69c8" 00:10:06.811 ], 00:10:06.811 "product_name": "Malloc disk", 00:10:06.811 "block_size": 512, 00:10:06.811 "num_blocks": 65536, 00:10:06.811 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:06.811 "assigned_rate_limits": { 00:10:06.811 "rw_ios_per_sec": 0, 00:10:06.811 "rw_mbytes_per_sec": 0, 00:10:06.811 "r_mbytes_per_sec": 0, 00:10:06.811 "w_mbytes_per_sec": 0 00:10:06.811 }, 00:10:06.811 "claimed": false, 00:10:06.811 "zoned": false, 00:10:06.811 "supported_io_types": { 00:10:06.811 "read": true, 00:10:06.811 "write": true, 00:10:06.811 "unmap": true, 00:10:06.811 "flush": true, 00:10:06.811 "reset": true, 00:10:06.811 "nvme_admin": false, 00:10:06.811 "nvme_io": false, 00:10:06.811 "nvme_io_md": false, 00:10:06.811 "write_zeroes": true, 00:10:06.811 "zcopy": true, 00:10:06.811 "get_zone_info": false, 00:10:06.811 "zone_management": false, 00:10:06.811 "zone_append": false, 00:10:06.811 "compare": false, 00:10:06.811 "compare_and_write": false, 00:10:06.811 "abort": true, 00:10:06.811 "seek_hole": false, 00:10:06.811 "seek_data": false, 00:10:06.811 "copy": true, 00:10:06.811 "nvme_iov_md": false 00:10:06.811 }, 00:10:06.811 "memory_domains": [ 00:10:06.811 { 00:10:06.811 "dma_device_id": "system", 00:10:06.811 "dma_device_type": 1 00:10:06.811 }, 00:10:06.811 { 00:10:06.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.811 "dma_device_type": 2 00:10:06.811 } 00:10:06.811 ], 00:10:06.811 "driver_specific": {} 00:10:06.811 } 00:10:06.811 ] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.811 BaseBdev4 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:06.811 [ 00:10:06.811 { 00:10:06.811 "name": "BaseBdev4", 00:10:06.811 "aliases": [ 00:10:06.811 "314b081b-e6fa-4227-8be6-e78d7a2b7fec" 00:10:06.811 ], 00:10:06.811 "product_name": "Malloc disk", 00:10:06.811 "block_size": 512, 00:10:06.811 "num_blocks": 65536, 00:10:06.811 "uuid": "314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:06.811 "assigned_rate_limits": { 00:10:06.811 "rw_ios_per_sec": 0, 00:10:06.811 "rw_mbytes_per_sec": 0, 00:10:06.811 "r_mbytes_per_sec": 0, 00:10:06.811 "w_mbytes_per_sec": 0 00:10:06.811 }, 00:10:06.811 "claimed": false, 00:10:06.811 "zoned": false, 00:10:06.811 "supported_io_types": { 00:10:06.811 "read": true, 00:10:06.811 "write": true, 00:10:06.811 "unmap": true, 00:10:06.811 "flush": true, 00:10:06.811 "reset": true, 00:10:06.811 "nvme_admin": false, 00:10:06.811 "nvme_io": false, 00:10:06.811 "nvme_io_md": false, 00:10:06.811 "write_zeroes": true, 00:10:06.811 "zcopy": true, 00:10:06.811 "get_zone_info": false, 00:10:06.811 "zone_management": false, 00:10:06.811 "zone_append": false, 00:10:06.811 "compare": false, 00:10:06.811 "compare_and_write": false, 00:10:06.811 "abort": true, 00:10:06.811 "seek_hole": false, 00:10:06.811 "seek_data": false, 00:10:06.811 "copy": true, 00:10:06.811 "nvme_iov_md": false 00:10:06.811 }, 00:10:06.811 "memory_domains": [ 00:10:06.811 { 00:10:06.811 "dma_device_id": "system", 00:10:06.811 "dma_device_type": 1 00:10:06.811 }, 00:10:06.811 { 00:10:06.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.811 "dma_device_type": 2 00:10:06.811 } 00:10:06.811 ], 00:10:06.811 "driver_specific": {} 00:10:06.811 } 00:10:06.811 ] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.811 12:30:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.811 [2024-11-19 12:30:11.908301] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.811 [2024-11-19 12:30:11.908345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.811 [2024-11-19 12:30:11.908367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.811 [2024-11-19 12:30:11.910297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.811 [2024-11-19 12:30:11.910353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.811 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.811 "name": "Existed_Raid", 00:10:06.811 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:06.811 "strip_size_kb": 64, 00:10:06.811 "state": "configuring", 00:10:06.811 "raid_level": "raid0", 00:10:06.811 "superblock": true, 00:10:06.811 "num_base_bdevs": 4, 00:10:06.811 "num_base_bdevs_discovered": 3, 00:10:06.811 "num_base_bdevs_operational": 4, 00:10:06.811 "base_bdevs_list": [ 00:10:06.811 { 00:10:06.811 "name": "BaseBdev1", 00:10:06.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.811 "is_configured": false, 00:10:06.811 "data_offset": 0, 00:10:06.811 "data_size": 0 00:10:06.811 }, 00:10:06.811 { 00:10:06.811 "name": "BaseBdev2", 00:10:06.812 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:06.812 "is_configured": true, 00:10:06.812 "data_offset": 2048, 00:10:06.812 "data_size": 63488 
00:10:06.812 }, 00:10:06.812 { 00:10:06.812 "name": "BaseBdev3", 00:10:06.812 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:06.812 "is_configured": true, 00:10:06.812 "data_offset": 2048, 00:10:06.812 "data_size": 63488 00:10:06.812 }, 00:10:06.812 { 00:10:06.812 "name": "BaseBdev4", 00:10:06.812 "uuid": "314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:06.812 "is_configured": true, 00:10:06.812 "data_offset": 2048, 00:10:06.812 "data_size": 63488 00:10:06.812 } 00:10:06.812 ] 00:10:06.812 }' 00:10:06.812 12:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.812 12:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 [2024-11-19 12:30:12.323642] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.071 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.331 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.331 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.331 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.331 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.331 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.331 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.331 "name": "Existed_Raid", 00:10:07.331 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:07.331 "strip_size_kb": 64, 00:10:07.331 "state": "configuring", 00:10:07.331 "raid_level": "raid0", 00:10:07.331 "superblock": true, 00:10:07.331 "num_base_bdevs": 4, 00:10:07.331 "num_base_bdevs_discovered": 2, 00:10:07.331 "num_base_bdevs_operational": 4, 00:10:07.331 "base_bdevs_list": [ 00:10:07.331 { 00:10:07.331 "name": "BaseBdev1", 00:10:07.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.331 "is_configured": false, 00:10:07.331 "data_offset": 0, 00:10:07.331 "data_size": 0 00:10:07.331 }, 00:10:07.331 { 00:10:07.331 "name": null, 00:10:07.331 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:07.331 "is_configured": false, 00:10:07.331 "data_offset": 0, 00:10:07.331 "data_size": 63488 
00:10:07.331 }, 00:10:07.331 { 00:10:07.331 "name": "BaseBdev3", 00:10:07.331 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:07.331 "is_configured": true, 00:10:07.331 "data_offset": 2048, 00:10:07.331 "data_size": 63488 00:10:07.331 }, 00:10:07.331 { 00:10:07.331 "name": "BaseBdev4", 00:10:07.331 "uuid": "314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:07.331 "is_configured": true, 00:10:07.331 "data_offset": 2048, 00:10:07.331 "data_size": 63488 00:10:07.331 } 00:10:07.331 ] 00:10:07.331 }' 00:10:07.331 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.331 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.590 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.591 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.591 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.591 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.591 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.850 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 [2024-11-19 12:30:12.874022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.851 BaseBdev1 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 [ 00:10:07.851 { 00:10:07.851 "name": "BaseBdev1", 00:10:07.851 "aliases": [ 00:10:07.851 "27c26c7d-c489-4609-8757-4272986a47a5" 00:10:07.851 ], 00:10:07.851 "product_name": "Malloc disk", 00:10:07.851 "block_size": 512, 00:10:07.851 "num_blocks": 65536, 00:10:07.851 "uuid": "27c26c7d-c489-4609-8757-4272986a47a5", 00:10:07.851 "assigned_rate_limits": { 00:10:07.851 "rw_ios_per_sec": 0, 00:10:07.851 "rw_mbytes_per_sec": 0, 
00:10:07.851 "r_mbytes_per_sec": 0, 00:10:07.851 "w_mbytes_per_sec": 0 00:10:07.851 }, 00:10:07.851 "claimed": true, 00:10:07.851 "claim_type": "exclusive_write", 00:10:07.851 "zoned": false, 00:10:07.851 "supported_io_types": { 00:10:07.851 "read": true, 00:10:07.851 "write": true, 00:10:07.851 "unmap": true, 00:10:07.851 "flush": true, 00:10:07.851 "reset": true, 00:10:07.851 "nvme_admin": false, 00:10:07.851 "nvme_io": false, 00:10:07.851 "nvme_io_md": false, 00:10:07.851 "write_zeroes": true, 00:10:07.851 "zcopy": true, 00:10:07.851 "get_zone_info": false, 00:10:07.851 "zone_management": false, 00:10:07.851 "zone_append": false, 00:10:07.851 "compare": false, 00:10:07.851 "compare_and_write": false, 00:10:07.851 "abort": true, 00:10:07.851 "seek_hole": false, 00:10:07.851 "seek_data": false, 00:10:07.851 "copy": true, 00:10:07.851 "nvme_iov_md": false 00:10:07.851 }, 00:10:07.851 "memory_domains": [ 00:10:07.851 { 00:10:07.851 "dma_device_id": "system", 00:10:07.851 "dma_device_type": 1 00:10:07.851 }, 00:10:07.851 { 00:10:07.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.851 "dma_device_type": 2 00:10:07.851 } 00:10:07.851 ], 00:10:07.851 "driver_specific": {} 00:10:07.851 } 00:10:07.851 ] 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.851 12:30:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.851 "name": "Existed_Raid", 00:10:07.851 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:07.851 "strip_size_kb": 64, 00:10:07.851 "state": "configuring", 00:10:07.851 "raid_level": "raid0", 00:10:07.851 "superblock": true, 00:10:07.851 "num_base_bdevs": 4, 00:10:07.851 "num_base_bdevs_discovered": 3, 00:10:07.851 "num_base_bdevs_operational": 4, 00:10:07.851 "base_bdevs_list": [ 00:10:07.851 { 00:10:07.851 "name": "BaseBdev1", 00:10:07.851 "uuid": "27c26c7d-c489-4609-8757-4272986a47a5", 00:10:07.851 "is_configured": true, 00:10:07.851 "data_offset": 2048, 00:10:07.851 "data_size": 63488 00:10:07.851 }, 00:10:07.851 { 
00:10:07.851 "name": null, 00:10:07.851 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:07.851 "is_configured": false, 00:10:07.851 "data_offset": 0, 00:10:07.851 "data_size": 63488 00:10:07.851 }, 00:10:07.851 { 00:10:07.851 "name": "BaseBdev3", 00:10:07.851 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:07.851 "is_configured": true, 00:10:07.851 "data_offset": 2048, 00:10:07.851 "data_size": 63488 00:10:07.851 }, 00:10:07.851 { 00:10:07.851 "name": "BaseBdev4", 00:10:07.851 "uuid": "314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:07.851 "is_configured": true, 00:10:07.851 "data_offset": 2048, 00:10:07.851 "data_size": 63488 00:10:07.851 } 00:10:07.851 ] 00:10:07.851 }' 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.851 12:30:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.420 [2024-11-19 12:30:13.457125] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.420 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.421 12:30:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.421 "name": "Existed_Raid", 00:10:08.421 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:08.421 "strip_size_kb": 64, 00:10:08.421 "state": "configuring", 00:10:08.421 "raid_level": "raid0", 00:10:08.421 "superblock": true, 00:10:08.421 "num_base_bdevs": 4, 00:10:08.421 "num_base_bdevs_discovered": 2, 00:10:08.421 "num_base_bdevs_operational": 4, 00:10:08.421 "base_bdevs_list": [ 00:10:08.421 { 00:10:08.421 "name": "BaseBdev1", 00:10:08.421 "uuid": "27c26c7d-c489-4609-8757-4272986a47a5", 00:10:08.421 "is_configured": true, 00:10:08.421 "data_offset": 2048, 00:10:08.421 "data_size": 63488 00:10:08.421 }, 00:10:08.421 { 00:10:08.421 "name": null, 00:10:08.421 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:08.421 "is_configured": false, 00:10:08.421 "data_offset": 0, 00:10:08.421 "data_size": 63488 00:10:08.421 }, 00:10:08.421 { 00:10:08.421 "name": null, 00:10:08.421 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:08.421 "is_configured": false, 00:10:08.421 "data_offset": 0, 00:10:08.421 "data_size": 63488 00:10:08.421 }, 00:10:08.421 { 00:10:08.421 "name": "BaseBdev4", 00:10:08.421 "uuid": "314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:08.421 "is_configured": true, 00:10:08.421 "data_offset": 2048, 00:10:08.421 "data_size": 63488 00:10:08.421 } 00:10:08.421 ] 00:10:08.421 }' 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.421 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.680 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.680 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.680 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.680 
12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.940 [2024-11-19 12:30:13.976339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.940 12:30:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.940 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.940 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.940 "name": "Existed_Raid", 00:10:08.940 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:08.940 "strip_size_kb": 64, 00:10:08.940 "state": "configuring", 00:10:08.940 "raid_level": "raid0", 00:10:08.940 "superblock": true, 00:10:08.940 "num_base_bdevs": 4, 00:10:08.940 "num_base_bdevs_discovered": 3, 00:10:08.940 "num_base_bdevs_operational": 4, 00:10:08.940 "base_bdevs_list": [ 00:10:08.940 { 00:10:08.940 "name": "BaseBdev1", 00:10:08.940 "uuid": "27c26c7d-c489-4609-8757-4272986a47a5", 00:10:08.940 "is_configured": true, 00:10:08.940 "data_offset": 2048, 00:10:08.940 "data_size": 63488 00:10:08.940 }, 00:10:08.940 { 00:10:08.940 "name": null, 00:10:08.940 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:08.940 "is_configured": false, 00:10:08.940 "data_offset": 0, 00:10:08.940 "data_size": 63488 00:10:08.940 }, 00:10:08.940 { 00:10:08.940 "name": "BaseBdev3", 00:10:08.940 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:08.940 "is_configured": true, 00:10:08.940 "data_offset": 2048, 00:10:08.940 "data_size": 63488 00:10:08.940 }, 00:10:08.940 { 00:10:08.940 "name": "BaseBdev4", 00:10:08.940 "uuid": 
"314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:08.940 "is_configured": true, 00:10:08.940 "data_offset": 2048, 00:10:08.940 "data_size": 63488 00:10:08.940 } 00:10:08.940 ] 00:10:08.940 }' 00:10:08.940 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.940 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.199 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.199 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.199 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.199 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.199 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.199 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:09.199 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.199 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.199 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.459 [2024-11-19 12:30:14.459537] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.459 "name": "Existed_Raid", 00:10:09.459 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:09.459 "strip_size_kb": 64, 00:10:09.459 "state": "configuring", 00:10:09.459 "raid_level": "raid0", 00:10:09.459 "superblock": true, 00:10:09.459 "num_base_bdevs": 4, 00:10:09.459 "num_base_bdevs_discovered": 2, 00:10:09.459 "num_base_bdevs_operational": 4, 00:10:09.459 "base_bdevs_list": [ 00:10:09.459 { 00:10:09.459 "name": null, 00:10:09.459 
"uuid": "27c26c7d-c489-4609-8757-4272986a47a5", 00:10:09.459 "is_configured": false, 00:10:09.459 "data_offset": 0, 00:10:09.459 "data_size": 63488 00:10:09.459 }, 00:10:09.459 { 00:10:09.459 "name": null, 00:10:09.459 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:09.459 "is_configured": false, 00:10:09.459 "data_offset": 0, 00:10:09.459 "data_size": 63488 00:10:09.459 }, 00:10:09.459 { 00:10:09.459 "name": "BaseBdev3", 00:10:09.459 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:09.459 "is_configured": true, 00:10:09.459 "data_offset": 2048, 00:10:09.459 "data_size": 63488 00:10:09.459 }, 00:10:09.459 { 00:10:09.459 "name": "BaseBdev4", 00:10:09.459 "uuid": "314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:09.459 "is_configured": true, 00:10:09.459 "data_offset": 2048, 00:10:09.459 "data_size": 63488 00:10:09.459 } 00:10:09.459 ] 00:10:09.459 }' 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.459 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.719 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.719 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.719 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.719 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.719 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.979 [2024-11-19 12:30:14.993573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.979 12:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.979 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.979 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.979 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.979 12:30:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.979 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.979 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.979 "name": "Existed_Raid", 00:10:09.979 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:09.979 "strip_size_kb": 64, 00:10:09.979 "state": "configuring", 00:10:09.979 "raid_level": "raid0", 00:10:09.979 "superblock": true, 00:10:09.979 "num_base_bdevs": 4, 00:10:09.979 "num_base_bdevs_discovered": 3, 00:10:09.979 "num_base_bdevs_operational": 4, 00:10:09.979 "base_bdevs_list": [ 00:10:09.979 { 00:10:09.979 "name": null, 00:10:09.979 "uuid": "27c26c7d-c489-4609-8757-4272986a47a5", 00:10:09.979 "is_configured": false, 00:10:09.979 "data_offset": 0, 00:10:09.979 "data_size": 63488 00:10:09.979 }, 00:10:09.979 { 00:10:09.979 "name": "BaseBdev2", 00:10:09.979 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:09.979 "is_configured": true, 00:10:09.979 "data_offset": 2048, 00:10:09.979 "data_size": 63488 00:10:09.979 }, 00:10:09.979 { 00:10:09.979 "name": "BaseBdev3", 00:10:09.979 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:09.979 "is_configured": true, 00:10:09.979 "data_offset": 2048, 00:10:09.979 "data_size": 63488 00:10:09.979 }, 00:10:09.979 { 00:10:09.979 "name": "BaseBdev4", 00:10:09.979 "uuid": "314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:09.979 "is_configured": true, 00:10:09.979 "data_offset": 2048, 00:10:09.979 "data_size": 63488 00:10:09.979 } 00:10:09.979 ] 00:10:09.979 }' 00:10:09.979 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.979 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.239 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.239 12:30:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.239 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.239 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.239 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.239 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:10.498 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.498 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.498 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.498 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:10.498 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.498 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 27c26c7d-c489-4609-8757-4272986a47a5 00:10:10.498 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.498 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.498 [2024-11-19 12:30:15.559655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:10.498 [2024-11-19 12:30:15.559862] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:10.498 [2024-11-19 12:30:15.559877] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.498 NewBaseBdev 00:10:10.499 [2024-11-19 12:30:15.560141] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:10.499 [2024-11-19 12:30:15.560275] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:10.499 [2024-11-19 12:30:15.560289] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:10.499 [2024-11-19 12:30:15.560384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.499 
12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.499 [ 00:10:10.499 { 00:10:10.499 "name": "NewBaseBdev", 00:10:10.499 "aliases": [ 00:10:10.499 "27c26c7d-c489-4609-8757-4272986a47a5" 00:10:10.499 ], 00:10:10.499 "product_name": "Malloc disk", 00:10:10.499 "block_size": 512, 00:10:10.499 "num_blocks": 65536, 00:10:10.499 "uuid": "27c26c7d-c489-4609-8757-4272986a47a5", 00:10:10.499 "assigned_rate_limits": { 00:10:10.499 "rw_ios_per_sec": 0, 00:10:10.499 "rw_mbytes_per_sec": 0, 00:10:10.499 "r_mbytes_per_sec": 0, 00:10:10.499 "w_mbytes_per_sec": 0 00:10:10.499 }, 00:10:10.499 "claimed": true, 00:10:10.499 "claim_type": "exclusive_write", 00:10:10.499 "zoned": false, 00:10:10.499 "supported_io_types": { 00:10:10.499 "read": true, 00:10:10.499 "write": true, 00:10:10.499 "unmap": true, 00:10:10.499 "flush": true, 00:10:10.499 "reset": true, 00:10:10.499 "nvme_admin": false, 00:10:10.499 "nvme_io": false, 00:10:10.499 "nvme_io_md": false, 00:10:10.499 "write_zeroes": true, 00:10:10.499 "zcopy": true, 00:10:10.499 "get_zone_info": false, 00:10:10.499 "zone_management": false, 00:10:10.499 "zone_append": false, 00:10:10.499 "compare": false, 00:10:10.499 "compare_and_write": false, 00:10:10.499 "abort": true, 00:10:10.499 "seek_hole": false, 00:10:10.499 "seek_data": false, 00:10:10.499 "copy": true, 00:10:10.499 "nvme_iov_md": false 00:10:10.499 }, 00:10:10.499 "memory_domains": [ 00:10:10.499 { 00:10:10.499 "dma_device_id": "system", 00:10:10.499 "dma_device_type": 1 00:10:10.499 }, 00:10:10.499 { 00:10:10.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.499 "dma_device_type": 2 00:10:10.499 } 00:10:10.499 ], 00:10:10.499 "driver_specific": {} 00:10:10.499 } 00:10:10.499 ] 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.499 12:30:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.499 "name": "Existed_Raid", 00:10:10.499 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:10.499 "strip_size_kb": 64, 00:10:10.499 
"state": "online", 00:10:10.499 "raid_level": "raid0", 00:10:10.499 "superblock": true, 00:10:10.499 "num_base_bdevs": 4, 00:10:10.499 "num_base_bdevs_discovered": 4, 00:10:10.499 "num_base_bdevs_operational": 4, 00:10:10.499 "base_bdevs_list": [ 00:10:10.499 { 00:10:10.499 "name": "NewBaseBdev", 00:10:10.499 "uuid": "27c26c7d-c489-4609-8757-4272986a47a5", 00:10:10.499 "is_configured": true, 00:10:10.499 "data_offset": 2048, 00:10:10.499 "data_size": 63488 00:10:10.499 }, 00:10:10.499 { 00:10:10.499 "name": "BaseBdev2", 00:10:10.499 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:10.499 "is_configured": true, 00:10:10.499 "data_offset": 2048, 00:10:10.499 "data_size": 63488 00:10:10.499 }, 00:10:10.499 { 00:10:10.499 "name": "BaseBdev3", 00:10:10.499 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:10.499 "is_configured": true, 00:10:10.499 "data_offset": 2048, 00:10:10.499 "data_size": 63488 00:10:10.499 }, 00:10:10.499 { 00:10:10.499 "name": "BaseBdev4", 00:10:10.499 "uuid": "314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:10.499 "is_configured": true, 00:10:10.499 "data_offset": 2048, 00:10:10.499 "data_size": 63488 00:10:10.499 } 00:10:10.499 ] 00:10:10.499 }' 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.499 12:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.757 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.757 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.757 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.757 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.758 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.758 
12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.016 [2024-11-19 12:30:16.023362] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.016 "name": "Existed_Raid", 00:10:11.016 "aliases": [ 00:10:11.016 "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac" 00:10:11.016 ], 00:10:11.016 "product_name": "Raid Volume", 00:10:11.016 "block_size": 512, 00:10:11.016 "num_blocks": 253952, 00:10:11.016 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:11.016 "assigned_rate_limits": { 00:10:11.016 "rw_ios_per_sec": 0, 00:10:11.016 "rw_mbytes_per_sec": 0, 00:10:11.016 "r_mbytes_per_sec": 0, 00:10:11.016 "w_mbytes_per_sec": 0 00:10:11.016 }, 00:10:11.016 "claimed": false, 00:10:11.016 "zoned": false, 00:10:11.016 "supported_io_types": { 00:10:11.016 "read": true, 00:10:11.016 "write": true, 00:10:11.016 "unmap": true, 00:10:11.016 "flush": true, 00:10:11.016 "reset": true, 00:10:11.016 "nvme_admin": false, 00:10:11.016 "nvme_io": false, 00:10:11.016 "nvme_io_md": false, 00:10:11.016 "write_zeroes": true, 00:10:11.016 "zcopy": false, 00:10:11.016 "get_zone_info": false, 00:10:11.016 "zone_management": false, 00:10:11.016 "zone_append": false, 00:10:11.016 "compare": false, 00:10:11.016 "compare_and_write": false, 00:10:11.016 "abort": 
false, 00:10:11.016 "seek_hole": false, 00:10:11.016 "seek_data": false, 00:10:11.016 "copy": false, 00:10:11.016 "nvme_iov_md": false 00:10:11.016 }, 00:10:11.016 "memory_domains": [ 00:10:11.016 { 00:10:11.016 "dma_device_id": "system", 00:10:11.016 "dma_device_type": 1 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.016 "dma_device_type": 2 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 "dma_device_id": "system", 00:10:11.016 "dma_device_type": 1 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.016 "dma_device_type": 2 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 "dma_device_id": "system", 00:10:11.016 "dma_device_type": 1 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.016 "dma_device_type": 2 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 "dma_device_id": "system", 00:10:11.016 "dma_device_type": 1 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.016 "dma_device_type": 2 00:10:11.016 } 00:10:11.016 ], 00:10:11.016 "driver_specific": { 00:10:11.016 "raid": { 00:10:11.016 "uuid": "6a1d0bcc-db91-42fa-a4a3-90d6fa9e19ac", 00:10:11.016 "strip_size_kb": 64, 00:10:11.016 "state": "online", 00:10:11.016 "raid_level": "raid0", 00:10:11.016 "superblock": true, 00:10:11.016 "num_base_bdevs": 4, 00:10:11.016 "num_base_bdevs_discovered": 4, 00:10:11.016 "num_base_bdevs_operational": 4, 00:10:11.016 "base_bdevs_list": [ 00:10:11.016 { 00:10:11.016 "name": "NewBaseBdev", 00:10:11.016 "uuid": "27c26c7d-c489-4609-8757-4272986a47a5", 00:10:11.016 "is_configured": true, 00:10:11.016 "data_offset": 2048, 00:10:11.016 "data_size": 63488 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 "name": "BaseBdev2", 00:10:11.016 "uuid": "9f891e8b-3e26-4e53-a1df-523cbf2c0e47", 00:10:11.016 "is_configured": true, 00:10:11.016 "data_offset": 2048, 00:10:11.016 "data_size": 63488 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 
"name": "BaseBdev3", 00:10:11.016 "uuid": "d28a933f-6c94-4989-8b8f-fd20263b69c8", 00:10:11.016 "is_configured": true, 00:10:11.016 "data_offset": 2048, 00:10:11.016 "data_size": 63488 00:10:11.016 }, 00:10:11.016 { 00:10:11.016 "name": "BaseBdev4", 00:10:11.016 "uuid": "314b081b-e6fa-4227-8be6-e78d7a2b7fec", 00:10:11.016 "is_configured": true, 00:10:11.016 "data_offset": 2048, 00:10:11.016 "data_size": 63488 00:10:11.016 } 00:10:11.016 ] 00:10:11.016 } 00:10:11.016 } 00:10:11.016 }' 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:11.016 BaseBdev2 00:10:11.016 BaseBdev3 00:10:11.016 BaseBdev4' 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.016 12:30:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.016 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.017 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.280 [2024-11-19 12:30:16.326569] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.280 [2024-11-19 12:30:16.326615] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.280 [2024-11-19 12:30:16.326720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.280 [2024-11-19 12:30:16.326812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.280 [2024-11-19 12:30:16.326826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81182 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81182 ']' 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81182 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81182 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.280 killing process with pid 81182 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81182' 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81182 00:10:11.280 [2024-11-19 12:30:16.375960] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.280 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81182 00:10:11.280 [2024-11-19 12:30:16.419331] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.575 12:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:11.575 00:10:11.575 real 0m9.774s 00:10:11.575 user 0m16.641s 00:10:11.575 sys 0m2.089s 00:10:11.575 12:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.575 12:30:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.575 ************************************ 00:10:11.575 END TEST raid_state_function_test_sb 00:10:11.575 ************************************ 00:10:11.575 12:30:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:11.575 12:30:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:11.575 12:30:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.575 12:30:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.575 ************************************ 00:10:11.575 START TEST raid_superblock_test 00:10:11.575 ************************************ 00:10:11.575 12:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:11.575 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:11.575 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:11.575 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81836 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81836 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81836 ']' 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.576 12:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.835 [2024-11-19 12:30:16.858714] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:11.835 [2024-11-19 12:30:16.858890] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81836 ] 00:10:11.835 [2024-11-19 12:30:17.025949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.835 [2024-11-19 12:30:17.077569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.094 [2024-11-19 12:30:17.123158] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.094 [2024-11-19 12:30:17.123204] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:12.661 
12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.661 malloc1 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.661 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.661 [2024-11-19 12:30:17.755975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:12.661 [2024-11-19 12:30:17.756045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.662 [2024-11-19 12:30:17.756068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:12.662 [2024-11-19 12:30:17.756082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.662 [2024-11-19 12:30:17.758243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.662 [2024-11-19 12:30:17.758286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:12.662 pt1 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 malloc2 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 [2024-11-19 12:30:17.794856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.662 [2024-11-19 12:30:17.794925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.662 [2024-11-19 12:30:17.794949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:12.662 [2024-11-19 12:30:17.794965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.662 [2024-11-19 12:30:17.798007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.662 [2024-11-19 12:30:17.798052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.662 
pt2 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 malloc3 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 [2024-11-19 12:30:17.824293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.662 [2024-11-19 12:30:17.824350] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.662 [2024-11-19 12:30:17.824374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:12.662 [2024-11-19 12:30:17.824387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.662 [2024-11-19 12:30:17.826868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.662 [2024-11-19 12:30:17.826905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.662 pt3 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 malloc4 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 [2024-11-19 12:30:17.853703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:12.662 [2024-11-19 12:30:17.853773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.662 [2024-11-19 12:30:17.853793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:12.662 [2024-11-19 12:30:17.853808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.662 [2024-11-19 12:30:17.856273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.662 [2024-11-19 12:30:17.856312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:12.662 pt4 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 [2024-11-19 12:30:17.865779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:12.662 [2024-11-19 
12:30:17.867969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.662 [2024-11-19 12:30:17.868040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.662 [2024-11-19 12:30:17.868115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:12.662 [2024-11-19 12:30:17.868302] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:12.662 [2024-11-19 12:30:17.868327] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:12.662 [2024-11-19 12:30:17.868647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:12.662 [2024-11-19 12:30:17.868841] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:12.662 [2024-11-19 12:30:17.868860] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:12.662 [2024-11-19 12:30:17.869013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:12.662 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.663 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.923 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.923 "name": "raid_bdev1", 00:10:12.923 "uuid": "5da63cac-3bf2-4d20-aad7-37b45d8089ad", 00:10:12.923 "strip_size_kb": 64, 00:10:12.923 "state": "online", 00:10:12.923 "raid_level": "raid0", 00:10:12.923 "superblock": true, 00:10:12.923 "num_base_bdevs": 4, 00:10:12.923 "num_base_bdevs_discovered": 4, 00:10:12.923 "num_base_bdevs_operational": 4, 00:10:12.923 "base_bdevs_list": [ 00:10:12.923 { 00:10:12.923 "name": "pt1", 00:10:12.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.923 "is_configured": true, 00:10:12.923 "data_offset": 2048, 00:10:12.923 "data_size": 63488 00:10:12.923 }, 00:10:12.923 { 00:10:12.923 "name": "pt2", 00:10:12.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.923 "is_configured": true, 00:10:12.923 "data_offset": 2048, 00:10:12.923 "data_size": 63488 00:10:12.923 }, 00:10:12.923 { 00:10:12.923 "name": "pt3", 00:10:12.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.923 "is_configured": true, 00:10:12.923 "data_offset": 2048, 00:10:12.923 
"data_size": 63488 00:10:12.923 }, 00:10:12.923 { 00:10:12.923 "name": "pt4", 00:10:12.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.923 "is_configured": true, 00:10:12.923 "data_offset": 2048, 00:10:12.923 "data_size": 63488 00:10:12.923 } 00:10:12.923 ] 00:10:12.924 }' 00:10:12.924 12:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.924 12:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.183 [2024-11-19 12:30:18.309385] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.183 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.183 "name": "raid_bdev1", 00:10:13.183 "aliases": [ 00:10:13.183 "5da63cac-3bf2-4d20-aad7-37b45d8089ad" 
00:10:13.183 ], 00:10:13.183 "product_name": "Raid Volume", 00:10:13.183 "block_size": 512, 00:10:13.183 "num_blocks": 253952, 00:10:13.183 "uuid": "5da63cac-3bf2-4d20-aad7-37b45d8089ad", 00:10:13.183 "assigned_rate_limits": { 00:10:13.183 "rw_ios_per_sec": 0, 00:10:13.183 "rw_mbytes_per_sec": 0, 00:10:13.183 "r_mbytes_per_sec": 0, 00:10:13.183 "w_mbytes_per_sec": 0 00:10:13.183 }, 00:10:13.183 "claimed": false, 00:10:13.183 "zoned": false, 00:10:13.183 "supported_io_types": { 00:10:13.183 "read": true, 00:10:13.183 "write": true, 00:10:13.183 "unmap": true, 00:10:13.183 "flush": true, 00:10:13.183 "reset": true, 00:10:13.183 "nvme_admin": false, 00:10:13.183 "nvme_io": false, 00:10:13.183 "nvme_io_md": false, 00:10:13.183 "write_zeroes": true, 00:10:13.183 "zcopy": false, 00:10:13.183 "get_zone_info": false, 00:10:13.183 "zone_management": false, 00:10:13.183 "zone_append": false, 00:10:13.183 "compare": false, 00:10:13.183 "compare_and_write": false, 00:10:13.183 "abort": false, 00:10:13.183 "seek_hole": false, 00:10:13.183 "seek_data": false, 00:10:13.183 "copy": false, 00:10:13.183 "nvme_iov_md": false 00:10:13.183 }, 00:10:13.183 "memory_domains": [ 00:10:13.183 { 00:10:13.183 "dma_device_id": "system", 00:10:13.183 "dma_device_type": 1 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.183 "dma_device_type": 2 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "dma_device_id": "system", 00:10:13.183 "dma_device_type": 1 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.183 "dma_device_type": 2 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "dma_device_id": "system", 00:10:13.183 "dma_device_type": 1 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.183 "dma_device_type": 2 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "dma_device_id": "system", 00:10:13.183 "dma_device_type": 1 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:13.183 "dma_device_type": 2 00:10:13.183 } 00:10:13.183 ], 00:10:13.183 "driver_specific": { 00:10:13.183 "raid": { 00:10:13.183 "uuid": "5da63cac-3bf2-4d20-aad7-37b45d8089ad", 00:10:13.183 "strip_size_kb": 64, 00:10:13.183 "state": "online", 00:10:13.183 "raid_level": "raid0", 00:10:13.183 "superblock": true, 00:10:13.183 "num_base_bdevs": 4, 00:10:13.183 "num_base_bdevs_discovered": 4, 00:10:13.183 "num_base_bdevs_operational": 4, 00:10:13.183 "base_bdevs_list": [ 00:10:13.183 { 00:10:13.183 "name": "pt1", 00:10:13.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.183 "is_configured": true, 00:10:13.184 "data_offset": 2048, 00:10:13.184 "data_size": 63488 00:10:13.184 }, 00:10:13.184 { 00:10:13.184 "name": "pt2", 00:10:13.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.184 "is_configured": true, 00:10:13.184 "data_offset": 2048, 00:10:13.184 "data_size": 63488 00:10:13.184 }, 00:10:13.184 { 00:10:13.184 "name": "pt3", 00:10:13.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.184 "is_configured": true, 00:10:13.184 "data_offset": 2048, 00:10:13.184 "data_size": 63488 00:10:13.184 }, 00:10:13.184 { 00:10:13.184 "name": "pt4", 00:10:13.184 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.184 "is_configured": true, 00:10:13.184 "data_offset": 2048, 00:10:13.184 "data_size": 63488 00:10:13.184 } 00:10:13.184 ] 00:10:13.184 } 00:10:13.184 } 00:10:13.184 }' 00:10:13.184 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.184 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.184 pt2 00:10:13.184 pt3 00:10:13.184 pt4' 00:10:13.184 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.443 12:30:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.443 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.444 [2024-11-19 12:30:18.649102] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5da63cac-3bf2-4d20-aad7-37b45d8089ad 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5da63cac-3bf2-4d20-aad7-37b45d8089ad ']' 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.444 [2024-11-19 12:30:18.692706] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.444 [2024-11-19 12:30:18.692762] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.444 [2024-11-19 12:30:18.692865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.444 [2024-11-19 12:30:18.692959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.444 [2024-11-19 12:30:18.692973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.444 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.703 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.704 12:30:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.704 [2024-11-19 12:30:18.840491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:13.704 [2024-11-19 12:30:18.842843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:13.704 [2024-11-19 12:30:18.842954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:13.704 [2024-11-19 12:30:18.843023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:13.704 [2024-11-19 12:30:18.843108] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:13.704 [2024-11-19 12:30:18.843207] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:13.704 [2024-11-19 12:30:18.843294] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:13.704 [2024-11-19 12:30:18.843368] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:13.704 [2024-11-19 12:30:18.843431] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.704 [2024-11-19 12:30:18.843473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:10:13.704 request: 00:10:13.704 { 00:10:13.704 "name": "raid_bdev1", 00:10:13.704 "raid_level": "raid0", 00:10:13.704 "base_bdevs": [ 00:10:13.704 "malloc1", 00:10:13.704 "malloc2", 00:10:13.704 "malloc3", 00:10:13.704 "malloc4" 00:10:13.704 ], 00:10:13.704 "strip_size_kb": 64, 00:10:13.704 "superblock": false, 00:10:13.704 "method": "bdev_raid_create", 00:10:13.704 "req_id": 1 00:10:13.704 } 00:10:13.704 Got JSON-RPC error response 00:10:13.704 response: 00:10:13.704 { 00:10:13.704 "code": -17, 00:10:13.704 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:13.704 } 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.704 [2024-11-19 12:30:18.900323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.704 [2024-11-19 12:30:18.900429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.704 [2024-11-19 12:30:18.900458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:13.704 [2024-11-19 12:30:18.900469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.704 [2024-11-19 12:30:18.903056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.704 [2024-11-19 12:30:18.903099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.704 [2024-11-19 12:30:18.903185] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:13.704 [2024-11-19 12:30:18.903237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.704 pt1 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.704 "name": "raid_bdev1", 00:10:13.704 "uuid": "5da63cac-3bf2-4d20-aad7-37b45d8089ad", 00:10:13.704 "strip_size_kb": 64, 00:10:13.704 "state": "configuring", 00:10:13.704 "raid_level": "raid0", 00:10:13.704 "superblock": true, 00:10:13.704 "num_base_bdevs": 4, 00:10:13.704 "num_base_bdevs_discovered": 1, 00:10:13.704 "num_base_bdevs_operational": 4, 00:10:13.704 "base_bdevs_list": [ 00:10:13.704 { 00:10:13.704 "name": "pt1", 00:10:13.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.704 "is_configured": true, 00:10:13.704 "data_offset": 2048, 00:10:13.704 "data_size": 63488 00:10:13.704 }, 00:10:13.704 { 00:10:13.704 "name": null, 00:10:13.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.704 "is_configured": false, 00:10:13.704 "data_offset": 2048, 00:10:13.704 "data_size": 63488 00:10:13.704 }, 00:10:13.704 { 00:10:13.704 "name": null, 00:10:13.704 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.704 "is_configured": false, 00:10:13.704 "data_offset": 2048, 00:10:13.704 "data_size": 63488 00:10:13.704 }, 00:10:13.704 { 00:10:13.704 "name": null, 00:10:13.704 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.704 "is_configured": false, 00:10:13.704 "data_offset": 2048, 00:10:13.704 "data_size": 63488 00:10:13.704 } 00:10:13.704 ] 00:10:13.704 }' 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.704 12:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.273 [2024-11-19 12:30:19.363948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.273 [2024-11-19 12:30:19.364096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.273 [2024-11-19 12:30:19.364152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:14.273 [2024-11-19 12:30:19.364198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.273 [2024-11-19 12:30:19.364734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.273 [2024-11-19 12:30:19.364813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.273 [2024-11-19 12:30:19.364952] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.273 [2024-11-19 12:30:19.365013] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.273 pt2 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.273 [2024-11-19 12:30:19.375924] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.273 12:30:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.273 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.273 "name": "raid_bdev1", 00:10:14.273 "uuid": "5da63cac-3bf2-4d20-aad7-37b45d8089ad", 00:10:14.273 "strip_size_kb": 64, 00:10:14.273 "state": "configuring", 00:10:14.273 "raid_level": "raid0", 00:10:14.273 "superblock": true, 00:10:14.273 "num_base_bdevs": 4, 00:10:14.273 "num_base_bdevs_discovered": 1, 00:10:14.274 "num_base_bdevs_operational": 4, 00:10:14.274 "base_bdevs_list": [ 00:10:14.274 { 00:10:14.274 "name": "pt1", 00:10:14.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.274 "is_configured": true, 00:10:14.274 "data_offset": 2048, 00:10:14.274 "data_size": 63488 00:10:14.274 }, 00:10:14.274 { 00:10:14.274 "name": null, 00:10:14.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.274 "is_configured": false, 00:10:14.274 "data_offset": 0, 00:10:14.274 "data_size": 63488 00:10:14.274 }, 00:10:14.274 { 00:10:14.274 "name": null, 00:10:14.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.274 "is_configured": false, 00:10:14.274 "data_offset": 2048, 00:10:14.274 "data_size": 63488 00:10:14.274 }, 00:10:14.274 { 00:10:14.274 "name": null, 00:10:14.274 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.274 "is_configured": false, 00:10:14.274 "data_offset": 2048, 00:10:14.274 "data_size": 63488 00:10:14.274 } 00:10:14.274 ] 00:10:14.274 }' 00:10:14.274 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.274 12:30:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.842 [2024-11-19 12:30:19.839552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.842 [2024-11-19 12:30:19.839643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.842 [2024-11-19 12:30:19.839666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:14.842 [2024-11-19 12:30:19.839679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.842 [2024-11-19 12:30:19.840174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.842 [2024-11-19 12:30:19.840200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.842 [2024-11-19 12:30:19.840283] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.842 [2024-11-19 12:30:19.840310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.842 pt2 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.842 [2024-11-19 12:30:19.851436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:14.842 [2024-11-19 12:30:19.851500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.842 [2024-11-19 12:30:19.851520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:14.842 [2024-11-19 12:30:19.851531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.842 [2024-11-19 12:30:19.851895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.842 [2024-11-19 12:30:19.851915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:14.842 [2024-11-19 12:30:19.851979] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:14.842 [2024-11-19 12:30:19.852000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:14.842 pt3 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.842 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.842 [2024-11-19 12:30:19.863436] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:14.842 [2024-11-19 12:30:19.863503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.842 [2024-11-19 12:30:19.863523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:14.842 [2024-11-19 12:30:19.863534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.842 [2024-11-19 12:30:19.863911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.842 [2024-11-19 12:30:19.863935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:14.842 [2024-11-19 12:30:19.864004] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:14.842 [2024-11-19 12:30:19.864031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:14.842 [2024-11-19 12:30:19.864146] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:14.842 [2024-11-19 12:30:19.864169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:14.843 [2024-11-19 12:30:19.864447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:14.843 [2024-11-19 12:30:19.864592] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:14.843 [2024-11-19 12:30:19.864603] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:14.843 [2024-11-19 12:30:19.864724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.843 pt4 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.843 "name": "raid_bdev1", 00:10:14.843 "uuid": "5da63cac-3bf2-4d20-aad7-37b45d8089ad", 00:10:14.843 "strip_size_kb": 64, 00:10:14.843 "state": "online", 00:10:14.843 "raid_level": "raid0", 00:10:14.843 
"superblock": true, 00:10:14.843 "num_base_bdevs": 4, 00:10:14.843 "num_base_bdevs_discovered": 4, 00:10:14.843 "num_base_bdevs_operational": 4, 00:10:14.843 "base_bdevs_list": [ 00:10:14.843 { 00:10:14.843 "name": "pt1", 00:10:14.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.843 "is_configured": true, 00:10:14.843 "data_offset": 2048, 00:10:14.843 "data_size": 63488 00:10:14.843 }, 00:10:14.843 { 00:10:14.843 "name": "pt2", 00:10:14.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.843 "is_configured": true, 00:10:14.843 "data_offset": 2048, 00:10:14.843 "data_size": 63488 00:10:14.843 }, 00:10:14.843 { 00:10:14.843 "name": "pt3", 00:10:14.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.843 "is_configured": true, 00:10:14.843 "data_offset": 2048, 00:10:14.843 "data_size": 63488 00:10:14.843 }, 00:10:14.843 { 00:10:14.843 "name": "pt4", 00:10:14.843 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.843 "is_configured": true, 00:10:14.843 "data_offset": 2048, 00:10:14.843 "data_size": 63488 00:10:14.843 } 00:10:14.843 ] 00:10:14.843 }' 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.843 12:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.103 12:30:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.103 [2024-11-19 12:30:20.319133] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.103 "name": "raid_bdev1", 00:10:15.103 "aliases": [ 00:10:15.103 "5da63cac-3bf2-4d20-aad7-37b45d8089ad" 00:10:15.103 ], 00:10:15.103 "product_name": "Raid Volume", 00:10:15.103 "block_size": 512, 00:10:15.103 "num_blocks": 253952, 00:10:15.103 "uuid": "5da63cac-3bf2-4d20-aad7-37b45d8089ad", 00:10:15.103 "assigned_rate_limits": { 00:10:15.103 "rw_ios_per_sec": 0, 00:10:15.103 "rw_mbytes_per_sec": 0, 00:10:15.103 "r_mbytes_per_sec": 0, 00:10:15.103 "w_mbytes_per_sec": 0 00:10:15.103 }, 00:10:15.103 "claimed": false, 00:10:15.103 "zoned": false, 00:10:15.103 "supported_io_types": { 00:10:15.103 "read": true, 00:10:15.103 "write": true, 00:10:15.103 "unmap": true, 00:10:15.103 "flush": true, 00:10:15.103 "reset": true, 00:10:15.103 "nvme_admin": false, 00:10:15.103 "nvme_io": false, 00:10:15.103 "nvme_io_md": false, 00:10:15.103 "write_zeroes": true, 00:10:15.103 "zcopy": false, 00:10:15.103 "get_zone_info": false, 00:10:15.103 "zone_management": false, 00:10:15.103 "zone_append": false, 00:10:15.103 "compare": false, 00:10:15.103 "compare_and_write": false, 00:10:15.103 "abort": false, 00:10:15.103 "seek_hole": false, 00:10:15.103 "seek_data": false, 00:10:15.103 "copy": false, 00:10:15.103 "nvme_iov_md": false 00:10:15.103 }, 00:10:15.103 
"memory_domains": [ 00:10:15.103 { 00:10:15.103 "dma_device_id": "system", 00:10:15.103 "dma_device_type": 1 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.103 "dma_device_type": 2 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "dma_device_id": "system", 00:10:15.103 "dma_device_type": 1 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.103 "dma_device_type": 2 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "dma_device_id": "system", 00:10:15.103 "dma_device_type": 1 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.103 "dma_device_type": 2 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "dma_device_id": "system", 00:10:15.103 "dma_device_type": 1 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.103 "dma_device_type": 2 00:10:15.103 } 00:10:15.103 ], 00:10:15.103 "driver_specific": { 00:10:15.103 "raid": { 00:10:15.103 "uuid": "5da63cac-3bf2-4d20-aad7-37b45d8089ad", 00:10:15.103 "strip_size_kb": 64, 00:10:15.103 "state": "online", 00:10:15.103 "raid_level": "raid0", 00:10:15.103 "superblock": true, 00:10:15.103 "num_base_bdevs": 4, 00:10:15.103 "num_base_bdevs_discovered": 4, 00:10:15.103 "num_base_bdevs_operational": 4, 00:10:15.103 "base_bdevs_list": [ 00:10:15.103 { 00:10:15.103 "name": "pt1", 00:10:15.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.103 "is_configured": true, 00:10:15.103 "data_offset": 2048, 00:10:15.103 "data_size": 63488 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "name": "pt2", 00:10:15.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.103 "is_configured": true, 00:10:15.103 "data_offset": 2048, 00:10:15.103 "data_size": 63488 00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "name": "pt3", 00:10:15.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.103 "is_configured": true, 00:10:15.103 "data_offset": 2048, 00:10:15.103 "data_size": 63488 
00:10:15.103 }, 00:10:15.103 { 00:10:15.103 "name": "pt4", 00:10:15.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.103 "is_configured": true, 00:10:15.103 "data_offset": 2048, 00:10:15.103 "data_size": 63488 00:10:15.103 } 00:10:15.103 ] 00:10:15.103 } 00:10:15.103 } 00:10:15.103 }' 00:10:15.103 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:15.363 pt2 00:10:15.363 pt3 00:10:15.363 pt4' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.363 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.623 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.623 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.623 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.623 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.624 [2024-11-19 12:30:20.646544] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5da63cac-3bf2-4d20-aad7-37b45d8089ad '!=' 5da63cac-3bf2-4d20-aad7-37b45d8089ad ']' 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81836 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81836 ']' 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81836 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81836 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81836' 00:10:15.624 killing process with pid 81836 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81836 00:10:15.624 [2024-11-19 12:30:20.729089] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.624 [2024-11-19 12:30:20.729185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.624 [2024-11-19 12:30:20.729255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.624 [2024-11-19 12:30:20.729267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:15.624 12:30:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81836 00:10:15.624 [2024-11-19 12:30:20.773800] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.884 12:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:15.884 00:10:15.884 real 0m4.260s 00:10:15.884 user 0m6.694s 00:10:15.884 sys 0m0.945s 00:10:15.884 12:30:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.884 ************************************ 00:10:15.884 END TEST raid_superblock_test 00:10:15.884 ************************************ 00:10:15.884 12:30:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 12:30:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:15.884 12:30:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:15.884 12:30:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.884 12:30:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 ************************************ 00:10:15.884 START TEST raid_read_error_test 00:10:15.884 ************************************ 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FHPDAd5Tck 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82084 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82084 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 82084 ']' 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.884 12:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.144 [2024-11-19 12:30:21.204182] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:16.144 [2024-11-19 12:30:21.204340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82084 ] 00:10:16.144 [2024-11-19 12:30:21.385936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.404 [2024-11-19 12:30:21.434434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.404 [2024-11-19 12:30:21.477180] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.404 [2024-11-19 12:30:21.477222] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 BaseBdev1_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 true 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 [2024-11-19 12:30:22.068130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:16.974 [2024-11-19 12:30:22.068190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.974 [2024-11-19 12:30:22.068214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:16.974 [2024-11-19 12:30:22.068226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.974 [2024-11-19 12:30:22.070337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.974 [2024-11-19 12:30:22.070379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.974 BaseBdev1 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 BaseBdev2_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 true 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 [2024-11-19 12:30:22.125240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:16.974 [2024-11-19 12:30:22.125312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.974 [2024-11-19 12:30:22.125341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:16.974 [2024-11-19 12:30:22.125355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.974 [2024-11-19 12:30:22.128674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.974 [2024-11-19 12:30:22.128716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.974 BaseBdev2 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 BaseBdev3_malloc 00:10:16.974 12:30:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 true 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 [2024-11-19 12:30:22.165919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:16.974 [2024-11-19 12:30:22.165962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.974 [2024-11-19 12:30:22.165981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:16.974 [2024-11-19 12:30:22.165990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.974 [2024-11-19 12:30:22.168030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.974 [2024-11-19 12:30:22.168067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:16.974 BaseBdev3 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.974 BaseBdev4_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:16.974 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.975 true 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.975 [2024-11-19 12:30:22.206366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:16.975 [2024-11-19 12:30:22.206464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.975 [2024-11-19 12:30:22.206488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:16.975 [2024-11-19 12:30:22.206497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.975 [2024-11-19 12:30:22.208504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.975 [2024-11-19 12:30:22.208544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:16.975 BaseBdev4 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.975 [2024-11-19 12:30:22.218402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.975 [2024-11-19 12:30:22.220199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.975 [2024-11-19 12:30:22.220296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.975 [2024-11-19 12:30:22.220347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:16.975 [2024-11-19 12:30:22.220531] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:16.975 [2024-11-19 12:30:22.220547] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:16.975 [2024-11-19 12:30:22.220801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:16.975 [2024-11-19 12:30:22.220944] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:16.975 [2024-11-19 12:30:22.220961] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:16.975 [2024-11-19 12:30:22.221079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:16.975 12:30:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.975 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.235 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.235 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.235 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.235 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.235 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.235 "name": "raid_bdev1", 00:10:17.235 "uuid": "f5fb2829-a677-4377-b5e8-7bcdc23aab97", 00:10:17.235 "strip_size_kb": 64, 00:10:17.235 "state": "online", 00:10:17.235 "raid_level": "raid0", 00:10:17.235 "superblock": true, 00:10:17.235 "num_base_bdevs": 4, 00:10:17.235 "num_base_bdevs_discovered": 4, 00:10:17.235 "num_base_bdevs_operational": 4, 00:10:17.235 "base_bdevs_list": [ 00:10:17.235 
{ 00:10:17.235 "name": "BaseBdev1", 00:10:17.235 "uuid": "397ecdc2-264f-578b-9493-6becb7dc36f7", 00:10:17.235 "is_configured": true, 00:10:17.235 "data_offset": 2048, 00:10:17.235 "data_size": 63488 00:10:17.235 }, 00:10:17.235 { 00:10:17.235 "name": "BaseBdev2", 00:10:17.235 "uuid": "682e353f-d5d6-5735-bf65-5353d461e7f0", 00:10:17.235 "is_configured": true, 00:10:17.235 "data_offset": 2048, 00:10:17.235 "data_size": 63488 00:10:17.235 }, 00:10:17.235 { 00:10:17.235 "name": "BaseBdev3", 00:10:17.235 "uuid": "63513b5b-9a87-5ff1-83e8-db4655936f4d", 00:10:17.235 "is_configured": true, 00:10:17.235 "data_offset": 2048, 00:10:17.235 "data_size": 63488 00:10:17.235 }, 00:10:17.235 { 00:10:17.235 "name": "BaseBdev4", 00:10:17.235 "uuid": "2bab634e-de98-5641-8823-9e39cb765035", 00:10:17.235 "is_configured": true, 00:10:17.235 "data_offset": 2048, 00:10:17.235 "data_size": 63488 00:10:17.235 } 00:10:17.235 ] 00:10:17.235 }' 00:10:17.235 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.235 12:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.494 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:17.494 12:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.494 [2024-11-19 12:30:22.737929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.431 12:30:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 12:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.691 12:30:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.691 "name": "raid_bdev1", 00:10:18.691 "uuid": "f5fb2829-a677-4377-b5e8-7bcdc23aab97", 00:10:18.691 "strip_size_kb": 64, 00:10:18.691 "state": "online", 00:10:18.691 "raid_level": "raid0", 00:10:18.691 "superblock": true, 00:10:18.691 "num_base_bdevs": 4, 00:10:18.691 "num_base_bdevs_discovered": 4, 00:10:18.691 "num_base_bdevs_operational": 4, 00:10:18.691 "base_bdevs_list": [ 00:10:18.691 { 00:10:18.691 "name": "BaseBdev1", 00:10:18.691 "uuid": "397ecdc2-264f-578b-9493-6becb7dc36f7", 00:10:18.691 "is_configured": true, 00:10:18.691 "data_offset": 2048, 00:10:18.691 "data_size": 63488 00:10:18.691 }, 00:10:18.691 { 00:10:18.691 "name": "BaseBdev2", 00:10:18.691 "uuid": "682e353f-d5d6-5735-bf65-5353d461e7f0", 00:10:18.691 "is_configured": true, 00:10:18.692 "data_offset": 2048, 00:10:18.692 "data_size": 63488 00:10:18.692 }, 00:10:18.692 { 00:10:18.692 "name": "BaseBdev3", 00:10:18.692 "uuid": "63513b5b-9a87-5ff1-83e8-db4655936f4d", 00:10:18.692 "is_configured": true, 00:10:18.692 "data_offset": 2048, 00:10:18.692 "data_size": 63488 00:10:18.692 }, 00:10:18.692 { 00:10:18.692 "name": "BaseBdev4", 00:10:18.692 "uuid": "2bab634e-de98-5641-8823-9e39cb765035", 00:10:18.692 "is_configured": true, 00:10:18.692 "data_offset": 2048, 00:10:18.692 "data_size": 63488 00:10:18.692 } 00:10:18.692 ] 00:10:18.692 }' 00:10:18.692 12:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.692 12:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.960 [2024-11-19 12:30:24.142102] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.960 [2024-11-19 12:30:24.142138] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.960 [2024-11-19 12:30:24.144702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.960 [2024-11-19 12:30:24.144750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.960 [2024-11-19 12:30:24.144906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.960 [2024-11-19 12:30:24.144949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:18.960 { 00:10:18.960 "results": [ 00:10:18.960 { 00:10:18.960 "job": "raid_bdev1", 00:10:18.960 "core_mask": "0x1", 00:10:18.960 "workload": "randrw", 00:10:18.960 "percentage": 50, 00:10:18.960 "status": "finished", 00:10:18.960 "queue_depth": 1, 00:10:18.960 "io_size": 131072, 00:10:18.960 "runtime": 1.404855, 00:10:18.960 "iops": 16172.487552096123, 00:10:18.960 "mibps": 2021.5609440120154, 00:10:18.960 "io_failed": 1, 00:10:18.960 "io_timeout": 0, 00:10:18.960 "avg_latency_us": 85.87902963401304, 00:10:18.960 "min_latency_us": 26.717903930131005, 00:10:18.960 "max_latency_us": 1366.5257641921398 00:10:18.960 } 00:10:18.960 ], 00:10:18.960 "core_count": 1 00:10:18.960 } 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82084 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 82084 ']' 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 82084 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82084 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82084' 00:10:18.960 killing process with pid 82084 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 82084 00:10:18.960 [2024-11-19 12:30:24.192876] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.960 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 82084 00:10:19.232 [2024-11-19 12:30:24.229877] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FHPDAd5Tck 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:19.232 00:10:19.232 real 0m3.390s 00:10:19.232 user 0m4.214s 00:10:19.232 sys 0m0.627s 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:19.232 12:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.232 ************************************ 00:10:19.232 END TEST raid_read_error_test 00:10:19.232 ************************************ 00:10:19.492 12:30:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:19.492 12:30:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:19.492 12:30:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.492 12:30:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.492 ************************************ 00:10:19.492 START TEST raid_write_error_test 00:10:19.492 ************************************ 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Bw6h5vjnDM 00:10:19.492 12:30:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82224 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82224 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82224 ']' 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.492 12:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.492 [2024-11-19 12:30:24.647158] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:19.492 [2024-11-19 12:30:24.647361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82224 ] 00:10:19.752 [2024-11-19 12:30:24.805962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.752 [2024-11-19 12:30:24.852697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.752 [2024-11-19 12:30:24.895936] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.752 [2024-11-19 12:30:24.895991] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.321 BaseBdev1_malloc 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.321 true 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.321 [2024-11-19 12:30:25.566438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.321 [2024-11-19 12:30:25.566586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.321 [2024-11-19 12:30:25.566623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:20.321 [2024-11-19 12:30:25.566653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.321 [2024-11-19 12:30:25.568772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.321 [2024-11-19 12:30:25.568844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.321 BaseBdev1 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.321 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 BaseBdev2_malloc 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:20.582 12:30:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 true 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 [2024-11-19 12:30:25.613888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.582 [2024-11-19 12:30:25.614014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.582 [2024-11-19 12:30:25.614048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:20.582 [2024-11-19 12:30:25.614079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.582 [2024-11-19 12:30:25.616119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.582 [2024-11-19 12:30:25.616191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.582 BaseBdev2 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:20.582 BaseBdev3_malloc 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 true 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 [2024-11-19 12:30:25.650537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:20.582 [2024-11-19 12:30:25.650651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.582 [2024-11-19 12:30:25.650685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:20.582 [2024-11-19 12:30:25.650719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.582 [2024-11-19 12:30:25.652741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.582 [2024-11-19 12:30:25.652824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:20.582 BaseBdev3 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 BaseBdev4_malloc 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 true 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 [2024-11-19 12:30:25.691142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:20.582 [2024-11-19 12:30:25.691263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.582 [2024-11-19 12:30:25.691304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:20.582 [2024-11-19 12:30:25.691337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.582 [2024-11-19 12:30:25.693427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.582 [2024-11-19 12:30:25.693500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:20.582 BaseBdev4 
00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 [2024-11-19 12:30:25.703179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.582 [2024-11-19 12:30:25.705096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.582 [2024-11-19 12:30:25.705189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.582 [2024-11-19 12:30:25.705246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:20.582 [2024-11-19 12:30:25.705441] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:20.582 [2024-11-19 12:30:25.705453] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.582 [2024-11-19 12:30:25.705705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:20.582 [2024-11-19 12:30:25.705856] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:20.582 [2024-11-19 12:30:25.705879] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:20.582 [2024-11-19 12:30:25.706004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.582 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.582 "name": "raid_bdev1", 00:10:20.582 "uuid": "e2f1367a-5a79-4685-9ba3-029d77095344", 00:10:20.582 "strip_size_kb": 64, 00:10:20.582 "state": "online", 00:10:20.582 "raid_level": "raid0", 00:10:20.582 "superblock": true, 00:10:20.582 "num_base_bdevs": 4, 00:10:20.582 "num_base_bdevs_discovered": 4, 00:10:20.582 
"num_base_bdevs_operational": 4, 00:10:20.582 "base_bdevs_list": [ 00:10:20.582 { 00:10:20.582 "name": "BaseBdev1", 00:10:20.582 "uuid": "a01b05ee-86c8-5b65-a5ec-9f451c44148b", 00:10:20.582 "is_configured": true, 00:10:20.582 "data_offset": 2048, 00:10:20.582 "data_size": 63488 00:10:20.582 }, 00:10:20.582 { 00:10:20.582 "name": "BaseBdev2", 00:10:20.582 "uuid": "00890ae4-b568-5ea0-a389-bc37b7e78b11", 00:10:20.582 "is_configured": true, 00:10:20.582 "data_offset": 2048, 00:10:20.582 "data_size": 63488 00:10:20.582 }, 00:10:20.582 { 00:10:20.582 "name": "BaseBdev3", 00:10:20.582 "uuid": "23876920-59ef-5e5d-a1aa-59b168e55a65", 00:10:20.582 "is_configured": true, 00:10:20.582 "data_offset": 2048, 00:10:20.582 "data_size": 63488 00:10:20.582 }, 00:10:20.582 { 00:10:20.582 "name": "BaseBdev4", 00:10:20.582 "uuid": "0c6e295a-06b3-54ba-9c5c-395be1c41997", 00:10:20.582 "is_configured": true, 00:10:20.582 "data_offset": 2048, 00:10:20.583 "data_size": 63488 00:10:20.583 } 00:10:20.583 ] 00:10:20.583 }' 00:10:20.583 12:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.583 12:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.152 12:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:21.152 12:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.152 [2024-11-19 12:30:26.206776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.091 "name": "raid_bdev1", 00:10:22.091 "uuid": "e2f1367a-5a79-4685-9ba3-029d77095344", 00:10:22.091 "strip_size_kb": 64, 00:10:22.091 "state": "online", 00:10:22.091 "raid_level": "raid0", 00:10:22.091 "superblock": true, 00:10:22.091 "num_base_bdevs": 4, 00:10:22.091 "num_base_bdevs_discovered": 4, 00:10:22.091 "num_base_bdevs_operational": 4, 00:10:22.091 "base_bdevs_list": [ 00:10:22.091 { 00:10:22.091 "name": "BaseBdev1", 00:10:22.091 "uuid": "a01b05ee-86c8-5b65-a5ec-9f451c44148b", 00:10:22.091 "is_configured": true, 00:10:22.091 "data_offset": 2048, 00:10:22.091 "data_size": 63488 00:10:22.091 }, 00:10:22.091 { 00:10:22.091 "name": "BaseBdev2", 00:10:22.091 "uuid": "00890ae4-b568-5ea0-a389-bc37b7e78b11", 00:10:22.091 "is_configured": true, 00:10:22.091 "data_offset": 2048, 00:10:22.091 "data_size": 63488 00:10:22.091 }, 00:10:22.091 { 00:10:22.091 "name": "BaseBdev3", 00:10:22.091 "uuid": "23876920-59ef-5e5d-a1aa-59b168e55a65", 00:10:22.091 "is_configured": true, 00:10:22.091 "data_offset": 2048, 00:10:22.091 "data_size": 63488 00:10:22.091 }, 00:10:22.091 { 00:10:22.091 "name": "BaseBdev4", 00:10:22.091 "uuid": "0c6e295a-06b3-54ba-9c5c-395be1c41997", 00:10:22.091 "is_configured": true, 00:10:22.091 "data_offset": 2048, 00:10:22.091 "data_size": 63488 00:10:22.091 } 00:10:22.091 ] 00:10:22.091 }' 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.091 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:22.350 [2024-11-19 12:30:27.587065] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.350 [2024-11-19 12:30:27.587109] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.350 [2024-11-19 12:30:27.589792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.350 [2024-11-19 12:30:27.589879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.350 [2024-11-19 12:30:27.589944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.350 [2024-11-19 12:30:27.589987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:22.350 { 00:10:22.350 "results": [ 00:10:22.350 { 00:10:22.350 "job": "raid_bdev1", 00:10:22.350 "core_mask": "0x1", 00:10:22.350 "workload": "randrw", 00:10:22.350 "percentage": 50, 00:10:22.350 "status": "finished", 00:10:22.350 "queue_depth": 1, 00:10:22.350 "io_size": 131072, 00:10:22.350 "runtime": 1.380941, 00:10:22.350 "iops": 16099.167162101785, 00:10:22.350 "mibps": 2012.3958952627231, 00:10:22.350 "io_failed": 1, 00:10:22.350 "io_timeout": 0, 00:10:22.350 "avg_latency_us": 86.30250772043681, 00:10:22.350 "min_latency_us": 26.717903930131005, 00:10:22.350 "max_latency_us": 1423.7624454148472 00:10:22.350 } 00:10:22.350 ], 00:10:22.350 "core_count": 1 00:10:22.350 } 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82224 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82224 ']' 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82224 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # 
uname 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.350 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82224 00:10:22.609 killing process with pid 82224 00:10:22.609 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.609 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.609 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82224' 00:10:22.609 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82224 00:10:22.609 [2024-11-19 12:30:27.637277] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.609 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82224 00:10:22.609 [2024-11-19 12:30:27.672780] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Bw6h5vjnDM 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:22.869 ************************************ 00:10:22.869 END TEST raid_write_error_test 00:10:22.869 ************************************ 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:22.869 00:10:22.869 real 0m3.378s 00:10:22.869 user 0m4.224s 00:10:22.869 sys 0m0.590s 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.869 12:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.869 12:30:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:22.869 12:30:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:22.869 12:30:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:22.869 12:30:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.869 12:30:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.869 ************************************ 00:10:22.869 START TEST raid_state_function_test 00:10:22.869 ************************************ 00:10:22.869 12:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:22.869 12:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:22.869 12:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:22.869 12:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.869 12:30:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:22.869 12:30:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82351 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82351' 00:10:22.869 Process raid pid: 82351 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82351 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82351 ']' 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.869 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.869 [2024-11-19 12:30:28.100478] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:22.869 [2024-11-19 12:30:28.100765] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.128 [2024-11-19 12:30:28.267190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.128 [2024-11-19 12:30:28.315629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.128 [2024-11-19 12:30:28.359320] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.128 [2024-11-19 12:30:28.359440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.696 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.696 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:23.696 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:23.696 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.696 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.696 [2024-11-19 12:30:28.953578] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.696 [2024-11-19 12:30:28.953656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.696 [2024-11-19 12:30:28.953678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.696 [2024-11-19 12:30:28.953690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.696 [2024-11-19 12:30:28.953696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:23.696 [2024-11-19 12:30:28.953708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.696 [2024-11-19 12:30:28.953715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:23.696 [2024-11-19 12:30:28.953726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.956 12:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.956 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.956 "name": "Existed_Raid", 00:10:23.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.956 "strip_size_kb": 64, 00:10:23.956 "state": "configuring", 00:10:23.956 "raid_level": "concat", 00:10:23.956 "superblock": false, 00:10:23.956 "num_base_bdevs": 4, 00:10:23.956 "num_base_bdevs_discovered": 0, 00:10:23.956 "num_base_bdevs_operational": 4, 00:10:23.956 "base_bdevs_list": [ 00:10:23.956 { 00:10:23.956 "name": "BaseBdev1", 00:10:23.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.956 "is_configured": false, 00:10:23.956 "data_offset": 0, 00:10:23.956 "data_size": 0 00:10:23.956 }, 00:10:23.956 { 00:10:23.956 "name": "BaseBdev2", 00:10:23.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.956 "is_configured": false, 00:10:23.956 "data_offset": 0, 00:10:23.956 "data_size": 0 00:10:23.956 }, 00:10:23.956 { 00:10:23.956 "name": "BaseBdev3", 00:10:23.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.956 "is_configured": false, 00:10:23.956 "data_offset": 0, 00:10:23.956 "data_size": 0 00:10:23.956 }, 00:10:23.956 { 00:10:23.956 "name": "BaseBdev4", 00:10:23.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.956 "is_configured": false, 00:10:23.956 "data_offset": 0, 00:10:23.956 "data_size": 0 00:10:23.956 } 00:10:23.956 ] 00:10:23.956 }' 00:10:23.956 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.956 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 [2024-11-19 12:30:29.372797] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.215 [2024-11-19 12:30:29.372945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 [2024-11-19 12:30:29.384803] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.215 [2024-11-19 12:30:29.384900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.215 [2024-11-19 12:30:29.384928] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.215 [2024-11-19 12:30:29.384952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.215 [2024-11-19 12:30:29.384970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:24.215 [2024-11-19 12:30:29.384991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.215 [2024-11-19 12:30:29.385009] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.215 [2024-11-19 12:30:29.385032] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 [2024-11-19 12:30:29.405912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.215 BaseBdev1 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 [ 00:10:24.215 { 00:10:24.215 "name": "BaseBdev1", 00:10:24.215 "aliases": [ 00:10:24.215 "989b6242-cda5-49f6-a7f4-2ba44fdf1c40" 00:10:24.215 ], 00:10:24.215 "product_name": "Malloc disk", 00:10:24.215 "block_size": 512, 00:10:24.215 "num_blocks": 65536, 00:10:24.215 "uuid": "989b6242-cda5-49f6-a7f4-2ba44fdf1c40", 00:10:24.215 "assigned_rate_limits": { 00:10:24.215 "rw_ios_per_sec": 0, 00:10:24.215 "rw_mbytes_per_sec": 0, 00:10:24.215 "r_mbytes_per_sec": 0, 00:10:24.215 "w_mbytes_per_sec": 0 00:10:24.215 }, 00:10:24.215 "claimed": true, 00:10:24.215 "claim_type": "exclusive_write", 00:10:24.215 "zoned": false, 00:10:24.215 "supported_io_types": { 00:10:24.215 "read": true, 00:10:24.215 "write": true, 00:10:24.215 "unmap": true, 00:10:24.215 "flush": true, 00:10:24.215 "reset": true, 00:10:24.215 "nvme_admin": false, 00:10:24.215 "nvme_io": false, 00:10:24.215 "nvme_io_md": false, 00:10:24.215 "write_zeroes": true, 00:10:24.215 "zcopy": true, 00:10:24.215 "get_zone_info": false, 00:10:24.215 "zone_management": false, 00:10:24.215 "zone_append": false, 00:10:24.215 "compare": false, 00:10:24.215 "compare_and_write": false, 00:10:24.215 "abort": true, 00:10:24.215 "seek_hole": false, 00:10:24.215 "seek_data": false, 00:10:24.215 "copy": true, 00:10:24.215 "nvme_iov_md": false 00:10:24.215 }, 00:10:24.215 "memory_domains": [ 00:10:24.215 { 00:10:24.215 "dma_device_id": "system", 00:10:24.215 "dma_device_type": 1 00:10:24.215 }, 00:10:24.215 { 00:10:24.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.215 "dma_device_type": 2 00:10:24.215 } 00:10:24.215 ], 00:10:24.215 "driver_specific": {} 00:10:24.215 } 00:10:24.215 ] 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.216 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.475 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.475 "name": "Existed_Raid", 
00:10:24.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.475 "strip_size_kb": 64, 00:10:24.475 "state": "configuring", 00:10:24.475 "raid_level": "concat", 00:10:24.475 "superblock": false, 00:10:24.475 "num_base_bdevs": 4, 00:10:24.475 "num_base_bdevs_discovered": 1, 00:10:24.475 "num_base_bdevs_operational": 4, 00:10:24.475 "base_bdevs_list": [ 00:10:24.475 { 00:10:24.475 "name": "BaseBdev1", 00:10:24.475 "uuid": "989b6242-cda5-49f6-a7f4-2ba44fdf1c40", 00:10:24.475 "is_configured": true, 00:10:24.475 "data_offset": 0, 00:10:24.475 "data_size": 65536 00:10:24.475 }, 00:10:24.475 { 00:10:24.475 "name": "BaseBdev2", 00:10:24.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.475 "is_configured": false, 00:10:24.475 "data_offset": 0, 00:10:24.475 "data_size": 0 00:10:24.475 }, 00:10:24.475 { 00:10:24.475 "name": "BaseBdev3", 00:10:24.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.475 "is_configured": false, 00:10:24.475 "data_offset": 0, 00:10:24.475 "data_size": 0 00:10:24.475 }, 00:10:24.475 { 00:10:24.475 "name": "BaseBdev4", 00:10:24.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.475 "is_configured": false, 00:10:24.475 "data_offset": 0, 00:10:24.475 "data_size": 0 00:10:24.475 } 00:10:24.475 ] 00:10:24.475 }' 00:10:24.475 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.475 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.735 [2024-11-19 12:30:29.933097] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.735 [2024-11-19 12:30:29.933164] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.735 [2024-11-19 12:30:29.945104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.735 [2024-11-19 12:30:29.946980] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.735 [2024-11-19 12:30:29.947111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.735 [2024-11-19 12:30:29.947125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:24.735 [2024-11-19 12:30:29.947134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.735 [2024-11-19 12:30:29.947141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.735 [2024-11-19 12:30:29.947149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.735 12:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.994 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.994 "name": "Existed_Raid", 00:10:24.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.994 "strip_size_kb": 64, 00:10:24.994 "state": "configuring", 00:10:24.994 "raid_level": "concat", 00:10:24.994 "superblock": false, 00:10:24.994 "num_base_bdevs": 4, 00:10:24.994 
"num_base_bdevs_discovered": 1, 00:10:24.994 "num_base_bdevs_operational": 4, 00:10:24.994 "base_bdevs_list": [ 00:10:24.994 { 00:10:24.994 "name": "BaseBdev1", 00:10:24.994 "uuid": "989b6242-cda5-49f6-a7f4-2ba44fdf1c40", 00:10:24.994 "is_configured": true, 00:10:24.994 "data_offset": 0, 00:10:24.994 "data_size": 65536 00:10:24.994 }, 00:10:24.994 { 00:10:24.994 "name": "BaseBdev2", 00:10:24.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.994 "is_configured": false, 00:10:24.994 "data_offset": 0, 00:10:24.994 "data_size": 0 00:10:24.994 }, 00:10:24.994 { 00:10:24.994 "name": "BaseBdev3", 00:10:24.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.994 "is_configured": false, 00:10:24.994 "data_offset": 0, 00:10:24.994 "data_size": 0 00:10:24.994 }, 00:10:24.994 { 00:10:24.994 "name": "BaseBdev4", 00:10:24.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.994 "is_configured": false, 00:10:24.994 "data_offset": 0, 00:10:24.994 "data_size": 0 00:10:24.994 } 00:10:24.994 ] 00:10:24.994 }' 00:10:24.994 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.994 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.253 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.253 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.253 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.253 [2024-11-19 12:30:30.384459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.253 BaseBdev2 00:10:25.253 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.253 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:25.253 12:30:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:25.253 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.254 [ 00:10:25.254 { 00:10:25.254 "name": "BaseBdev2", 00:10:25.254 "aliases": [ 00:10:25.254 "9921adc3-c262-476a-8825-803904aa8e42" 00:10:25.254 ], 00:10:25.254 "product_name": "Malloc disk", 00:10:25.254 "block_size": 512, 00:10:25.254 "num_blocks": 65536, 00:10:25.254 "uuid": "9921adc3-c262-476a-8825-803904aa8e42", 00:10:25.254 "assigned_rate_limits": { 00:10:25.254 "rw_ios_per_sec": 0, 00:10:25.254 "rw_mbytes_per_sec": 0, 00:10:25.254 "r_mbytes_per_sec": 0, 00:10:25.254 "w_mbytes_per_sec": 0 00:10:25.254 }, 00:10:25.254 "claimed": true, 00:10:25.254 "claim_type": "exclusive_write", 00:10:25.254 "zoned": false, 00:10:25.254 "supported_io_types": { 
00:10:25.254 "read": true, 00:10:25.254 "write": true, 00:10:25.254 "unmap": true, 00:10:25.254 "flush": true, 00:10:25.254 "reset": true, 00:10:25.254 "nvme_admin": false, 00:10:25.254 "nvme_io": false, 00:10:25.254 "nvme_io_md": false, 00:10:25.254 "write_zeroes": true, 00:10:25.254 "zcopy": true, 00:10:25.254 "get_zone_info": false, 00:10:25.254 "zone_management": false, 00:10:25.254 "zone_append": false, 00:10:25.254 "compare": false, 00:10:25.254 "compare_and_write": false, 00:10:25.254 "abort": true, 00:10:25.254 "seek_hole": false, 00:10:25.254 "seek_data": false, 00:10:25.254 "copy": true, 00:10:25.254 "nvme_iov_md": false 00:10:25.254 }, 00:10:25.254 "memory_domains": [ 00:10:25.254 { 00:10:25.254 "dma_device_id": "system", 00:10:25.254 "dma_device_type": 1 00:10:25.254 }, 00:10:25.254 { 00:10:25.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.254 "dma_device_type": 2 00:10:25.254 } 00:10:25.254 ], 00:10:25.254 "driver_specific": {} 00:10:25.254 } 00:10:25.254 ] 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.254 "name": "Existed_Raid", 00:10:25.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.254 "strip_size_kb": 64, 00:10:25.254 "state": "configuring", 00:10:25.254 "raid_level": "concat", 00:10:25.254 "superblock": false, 00:10:25.254 "num_base_bdevs": 4, 00:10:25.254 "num_base_bdevs_discovered": 2, 00:10:25.254 "num_base_bdevs_operational": 4, 00:10:25.254 "base_bdevs_list": [ 00:10:25.254 { 00:10:25.254 "name": "BaseBdev1", 00:10:25.254 "uuid": "989b6242-cda5-49f6-a7f4-2ba44fdf1c40", 00:10:25.254 "is_configured": true, 00:10:25.254 "data_offset": 0, 00:10:25.254 "data_size": 65536 00:10:25.254 }, 00:10:25.254 { 00:10:25.254 "name": "BaseBdev2", 00:10:25.254 "uuid": "9921adc3-c262-476a-8825-803904aa8e42", 00:10:25.254 
"is_configured": true, 00:10:25.254 "data_offset": 0, 00:10:25.254 "data_size": 65536 00:10:25.254 }, 00:10:25.254 { 00:10:25.254 "name": "BaseBdev3", 00:10:25.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.254 "is_configured": false, 00:10:25.254 "data_offset": 0, 00:10:25.254 "data_size": 0 00:10:25.254 }, 00:10:25.254 { 00:10:25.254 "name": "BaseBdev4", 00:10:25.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.254 "is_configured": false, 00:10:25.254 "data_offset": 0, 00:10:25.254 "data_size": 0 00:10:25.254 } 00:10:25.254 ] 00:10:25.254 }' 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.254 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 [2024-11-19 12:30:30.890875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.825 BaseBdev3 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 [ 00:10:25.825 { 00:10:25.825 "name": "BaseBdev3", 00:10:25.825 "aliases": [ 00:10:25.825 "d505f38e-36d0-4f12-bedf-c54014c173d7" 00:10:25.825 ], 00:10:25.825 "product_name": "Malloc disk", 00:10:25.825 "block_size": 512, 00:10:25.825 "num_blocks": 65536, 00:10:25.825 "uuid": "d505f38e-36d0-4f12-bedf-c54014c173d7", 00:10:25.825 "assigned_rate_limits": { 00:10:25.825 "rw_ios_per_sec": 0, 00:10:25.825 "rw_mbytes_per_sec": 0, 00:10:25.825 "r_mbytes_per_sec": 0, 00:10:25.825 "w_mbytes_per_sec": 0 00:10:25.825 }, 00:10:25.825 "claimed": true, 00:10:25.825 "claim_type": "exclusive_write", 00:10:25.825 "zoned": false, 00:10:25.825 "supported_io_types": { 00:10:25.825 "read": true, 00:10:25.825 "write": true, 00:10:25.825 "unmap": true, 00:10:25.825 "flush": true, 00:10:25.825 "reset": true, 00:10:25.825 "nvme_admin": false, 00:10:25.825 "nvme_io": false, 00:10:25.825 "nvme_io_md": false, 00:10:25.825 "write_zeroes": true, 00:10:25.825 "zcopy": true, 00:10:25.825 "get_zone_info": false, 00:10:25.825 "zone_management": false, 00:10:25.825 "zone_append": false, 00:10:25.825 "compare": false, 00:10:25.825 "compare_and_write": false, 
00:10:25.825 "abort": true, 00:10:25.825 "seek_hole": false, 00:10:25.825 "seek_data": false, 00:10:25.825 "copy": true, 00:10:25.825 "nvme_iov_md": false 00:10:25.825 }, 00:10:25.825 "memory_domains": [ 00:10:25.825 { 00:10:25.825 "dma_device_id": "system", 00:10:25.825 "dma_device_type": 1 00:10:25.825 }, 00:10:25.825 { 00:10:25.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.825 "dma_device_type": 2 00:10:25.825 } 00:10:25.825 ], 00:10:25.825 "driver_specific": {} 00:10:25.825 } 00:10:25.825 ] 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.825 "name": "Existed_Raid", 00:10:25.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.825 "strip_size_kb": 64, 00:10:25.825 "state": "configuring", 00:10:25.825 "raid_level": "concat", 00:10:25.825 "superblock": false, 00:10:25.825 "num_base_bdevs": 4, 00:10:25.825 "num_base_bdevs_discovered": 3, 00:10:25.825 "num_base_bdevs_operational": 4, 00:10:25.825 "base_bdevs_list": [ 00:10:25.825 { 00:10:25.825 "name": "BaseBdev1", 00:10:25.825 "uuid": "989b6242-cda5-49f6-a7f4-2ba44fdf1c40", 00:10:25.825 "is_configured": true, 00:10:25.825 "data_offset": 0, 00:10:25.825 "data_size": 65536 00:10:25.825 }, 00:10:25.825 { 00:10:25.825 "name": "BaseBdev2", 00:10:25.825 "uuid": "9921adc3-c262-476a-8825-803904aa8e42", 00:10:25.825 "is_configured": true, 00:10:25.825 "data_offset": 0, 00:10:25.825 "data_size": 65536 00:10:25.825 }, 00:10:25.825 { 00:10:25.825 "name": "BaseBdev3", 00:10:25.825 "uuid": "d505f38e-36d0-4f12-bedf-c54014c173d7", 00:10:25.825 "is_configured": true, 00:10:25.825 "data_offset": 0, 00:10:25.825 "data_size": 65536 00:10:25.825 }, 00:10:25.825 { 00:10:25.825 "name": "BaseBdev4", 00:10:25.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.825 "is_configured": false, 
00:10:25.825 "data_offset": 0, 00:10:25.825 "data_size": 0 00:10:25.825 } 00:10:25.825 ] 00:10:25.825 }' 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.825 12:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.396 [2024-11-19 12:30:31.389360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.396 [2024-11-19 12:30:31.389513] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:26.396 [2024-11-19 12:30:31.389526] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:26.396 [2024-11-19 12:30:31.389854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:26.396 [2024-11-19 12:30:31.390020] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:26.396 [2024-11-19 12:30:31.390034] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:26.396 [2024-11-19 12:30:31.390239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.396 BaseBdev4 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.396 [ 00:10:26.396 { 00:10:26.396 "name": "BaseBdev4", 00:10:26.396 "aliases": [ 00:10:26.396 "2dd038cb-1b35-4f66-a5bb-274f43781ac7" 00:10:26.396 ], 00:10:26.396 "product_name": "Malloc disk", 00:10:26.396 "block_size": 512, 00:10:26.396 "num_blocks": 65536, 00:10:26.396 "uuid": "2dd038cb-1b35-4f66-a5bb-274f43781ac7", 00:10:26.396 "assigned_rate_limits": { 00:10:26.396 "rw_ios_per_sec": 0, 00:10:26.396 "rw_mbytes_per_sec": 0, 00:10:26.396 "r_mbytes_per_sec": 0, 00:10:26.396 "w_mbytes_per_sec": 0 00:10:26.396 }, 00:10:26.396 "claimed": true, 00:10:26.396 "claim_type": "exclusive_write", 00:10:26.396 "zoned": false, 00:10:26.396 "supported_io_types": { 00:10:26.396 "read": true, 00:10:26.396 "write": true, 00:10:26.396 "unmap": true, 00:10:26.396 "flush": true, 00:10:26.396 "reset": true, 00:10:26.396 
"nvme_admin": false, 00:10:26.396 "nvme_io": false, 00:10:26.396 "nvme_io_md": false, 00:10:26.396 "write_zeroes": true, 00:10:26.396 "zcopy": true, 00:10:26.396 "get_zone_info": false, 00:10:26.396 "zone_management": false, 00:10:26.396 "zone_append": false, 00:10:26.396 "compare": false, 00:10:26.396 "compare_and_write": false, 00:10:26.396 "abort": true, 00:10:26.396 "seek_hole": false, 00:10:26.396 "seek_data": false, 00:10:26.396 "copy": true, 00:10:26.396 "nvme_iov_md": false 00:10:26.396 }, 00:10:26.396 "memory_domains": [ 00:10:26.396 { 00:10:26.396 "dma_device_id": "system", 00:10:26.396 "dma_device_type": 1 00:10:26.396 }, 00:10:26.396 { 00:10:26.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.396 "dma_device_type": 2 00:10:26.396 } 00:10:26.396 ], 00:10:26.396 "driver_specific": {} 00:10:26.396 } 00:10:26.396 ] 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.396 
12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.396 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.397 "name": "Existed_Raid", 00:10:26.397 "uuid": "e16bb540-60c7-414b-abce-48c7213839f2", 00:10:26.397 "strip_size_kb": 64, 00:10:26.397 "state": "online", 00:10:26.397 "raid_level": "concat", 00:10:26.397 "superblock": false, 00:10:26.397 "num_base_bdevs": 4, 00:10:26.397 "num_base_bdevs_discovered": 4, 00:10:26.397 "num_base_bdevs_operational": 4, 00:10:26.397 "base_bdevs_list": [ 00:10:26.397 { 00:10:26.397 "name": "BaseBdev1", 00:10:26.397 "uuid": "989b6242-cda5-49f6-a7f4-2ba44fdf1c40", 00:10:26.397 "is_configured": true, 00:10:26.397 "data_offset": 0, 00:10:26.397 "data_size": 65536 00:10:26.397 }, 00:10:26.397 { 00:10:26.397 "name": "BaseBdev2", 00:10:26.397 "uuid": "9921adc3-c262-476a-8825-803904aa8e42", 00:10:26.397 "is_configured": true, 00:10:26.397 "data_offset": 0, 00:10:26.397 "data_size": 65536 00:10:26.397 }, 00:10:26.397 { 00:10:26.397 "name": "BaseBdev3", 
00:10:26.397 "uuid": "d505f38e-36d0-4f12-bedf-c54014c173d7", 00:10:26.397 "is_configured": true, 00:10:26.397 "data_offset": 0, 00:10:26.397 "data_size": 65536 00:10:26.397 }, 00:10:26.397 { 00:10:26.397 "name": "BaseBdev4", 00:10:26.397 "uuid": "2dd038cb-1b35-4f66-a5bb-274f43781ac7", 00:10:26.397 "is_configured": true, 00:10:26.397 "data_offset": 0, 00:10:26.397 "data_size": 65536 00:10:26.397 } 00:10:26.397 ] 00:10:26.397 }' 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.397 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.665 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.665 [2024-11-19 12:30:31.904928] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.924 12:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.924 
12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.924 "name": "Existed_Raid", 00:10:26.924 "aliases": [ 00:10:26.924 "e16bb540-60c7-414b-abce-48c7213839f2" 00:10:26.924 ], 00:10:26.924 "product_name": "Raid Volume", 00:10:26.924 "block_size": 512, 00:10:26.924 "num_blocks": 262144, 00:10:26.924 "uuid": "e16bb540-60c7-414b-abce-48c7213839f2", 00:10:26.924 "assigned_rate_limits": { 00:10:26.924 "rw_ios_per_sec": 0, 00:10:26.924 "rw_mbytes_per_sec": 0, 00:10:26.924 "r_mbytes_per_sec": 0, 00:10:26.924 "w_mbytes_per_sec": 0 00:10:26.924 }, 00:10:26.924 "claimed": false, 00:10:26.924 "zoned": false, 00:10:26.924 "supported_io_types": { 00:10:26.924 "read": true, 00:10:26.924 "write": true, 00:10:26.924 "unmap": true, 00:10:26.924 "flush": true, 00:10:26.924 "reset": true, 00:10:26.924 "nvme_admin": false, 00:10:26.924 "nvme_io": false, 00:10:26.924 "nvme_io_md": false, 00:10:26.924 "write_zeroes": true, 00:10:26.924 "zcopy": false, 00:10:26.924 "get_zone_info": false, 00:10:26.924 "zone_management": false, 00:10:26.924 "zone_append": false, 00:10:26.924 "compare": false, 00:10:26.924 "compare_and_write": false, 00:10:26.924 "abort": false, 00:10:26.924 "seek_hole": false, 00:10:26.924 "seek_data": false, 00:10:26.924 "copy": false, 00:10:26.924 "nvme_iov_md": false 00:10:26.924 }, 00:10:26.924 "memory_domains": [ 00:10:26.924 { 00:10:26.924 "dma_device_id": "system", 00:10:26.924 "dma_device_type": 1 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.924 "dma_device_type": 2 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "dma_device_id": "system", 00:10:26.924 "dma_device_type": 1 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.924 "dma_device_type": 2 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "dma_device_id": "system", 00:10:26.924 "dma_device_type": 1 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:26.924 "dma_device_type": 2 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "dma_device_id": "system", 00:10:26.924 "dma_device_type": 1 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.924 "dma_device_type": 2 00:10:26.924 } 00:10:26.924 ], 00:10:26.924 "driver_specific": { 00:10:26.924 "raid": { 00:10:26.924 "uuid": "e16bb540-60c7-414b-abce-48c7213839f2", 00:10:26.924 "strip_size_kb": 64, 00:10:26.924 "state": "online", 00:10:26.924 "raid_level": "concat", 00:10:26.924 "superblock": false, 00:10:26.924 "num_base_bdevs": 4, 00:10:26.924 "num_base_bdevs_discovered": 4, 00:10:26.924 "num_base_bdevs_operational": 4, 00:10:26.924 "base_bdevs_list": [ 00:10:26.924 { 00:10:26.924 "name": "BaseBdev1", 00:10:26.924 "uuid": "989b6242-cda5-49f6-a7f4-2ba44fdf1c40", 00:10:26.924 "is_configured": true, 00:10:26.924 "data_offset": 0, 00:10:26.924 "data_size": 65536 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "name": "BaseBdev2", 00:10:26.924 "uuid": "9921adc3-c262-476a-8825-803904aa8e42", 00:10:26.924 "is_configured": true, 00:10:26.924 "data_offset": 0, 00:10:26.924 "data_size": 65536 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "name": "BaseBdev3", 00:10:26.924 "uuid": "d505f38e-36d0-4f12-bedf-c54014c173d7", 00:10:26.924 "is_configured": true, 00:10:26.924 "data_offset": 0, 00:10:26.924 "data_size": 65536 00:10:26.924 }, 00:10:26.924 { 00:10:26.924 "name": "BaseBdev4", 00:10:26.924 "uuid": "2dd038cb-1b35-4f66-a5bb-274f43781ac7", 00:10:26.924 "is_configured": true, 00:10:26.924 "data_offset": 0, 00:10:26.924 "data_size": 65536 00:10:26.924 } 00:10:26.924 ] 00:10:26.924 } 00:10:26.924 } 00:10:26.924 }' 00:10:26.924 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.924 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:26.925 BaseBdev2 
00:10:26.925 BaseBdev3 00:10:26.925 BaseBdev4' 00:10:26.925 12:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.925 12:30:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.925 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.183 12:30:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.183 [2024-11-19 12:30:32.232051] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.183 [2024-11-19 12:30:32.232095] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.183 [2024-11-19 12:30:32.232149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.183 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.183 "name": "Existed_Raid", 00:10:27.183 "uuid": "e16bb540-60c7-414b-abce-48c7213839f2", 00:10:27.183 "strip_size_kb": 64, 00:10:27.183 "state": "offline", 00:10:27.183 "raid_level": "concat", 00:10:27.183 "superblock": false, 00:10:27.183 "num_base_bdevs": 4, 00:10:27.183 "num_base_bdevs_discovered": 3, 00:10:27.183 "num_base_bdevs_operational": 3, 00:10:27.183 "base_bdevs_list": [ 00:10:27.183 { 00:10:27.183 "name": null, 00:10:27.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.183 "is_configured": false, 00:10:27.183 "data_offset": 0, 00:10:27.183 "data_size": 65536 00:10:27.184 }, 00:10:27.184 { 00:10:27.184 "name": "BaseBdev2", 00:10:27.184 "uuid": "9921adc3-c262-476a-8825-803904aa8e42", 00:10:27.184 "is_configured": 
true, 00:10:27.184 "data_offset": 0, 00:10:27.184 "data_size": 65536 00:10:27.184 }, 00:10:27.184 { 00:10:27.184 "name": "BaseBdev3", 00:10:27.184 "uuid": "d505f38e-36d0-4f12-bedf-c54014c173d7", 00:10:27.184 "is_configured": true, 00:10:27.184 "data_offset": 0, 00:10:27.184 "data_size": 65536 00:10:27.184 }, 00:10:27.184 { 00:10:27.184 "name": "BaseBdev4", 00:10:27.184 "uuid": "2dd038cb-1b35-4f66-a5bb-274f43781ac7", 00:10:27.184 "is_configured": true, 00:10:27.184 "data_offset": 0, 00:10:27.184 "data_size": 65536 00:10:27.184 } 00:10:27.184 ] 00:10:27.184 }' 00:10:27.184 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.184 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.442 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:27.442 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.442 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:27.442 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.442 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.442 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.442 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.702 [2024-11-19 12:30:32.710925] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.702 [2024-11-19 12:30:32.782057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:27.702 12:30:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.702 [2024-11-19 12:30:32.849200] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:27.702 [2024-11-19 12:30:32.849258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.702 BaseBdev2 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.702 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.702 [ 00:10:27.702 { 00:10:27.702 "name": "BaseBdev2", 00:10:27.702 "aliases": [ 00:10:27.702 "5d0975d3-4b83-40cd-bfab-df4aa0957893" 00:10:27.702 ], 00:10:27.702 "product_name": "Malloc disk", 00:10:27.702 "block_size": 512, 00:10:27.702 "num_blocks": 65536, 00:10:27.702 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:27.702 "assigned_rate_limits": { 00:10:27.702 "rw_ios_per_sec": 0, 00:10:27.702 "rw_mbytes_per_sec": 0, 00:10:27.702 "r_mbytes_per_sec": 0, 00:10:27.702 "w_mbytes_per_sec": 0 00:10:27.702 }, 00:10:27.703 "claimed": false, 00:10:27.703 "zoned": false, 00:10:27.703 "supported_io_types": { 00:10:27.703 "read": true, 00:10:27.703 "write": true, 00:10:27.703 "unmap": true, 00:10:27.703 "flush": true, 00:10:27.703 "reset": true, 00:10:27.703 "nvme_admin": false, 00:10:27.703 "nvme_io": false, 00:10:27.703 "nvme_io_md": false, 00:10:27.962 "write_zeroes": true, 00:10:27.962 "zcopy": true, 00:10:27.962 "get_zone_info": false, 00:10:27.962 "zone_management": false, 00:10:27.962 "zone_append": false, 00:10:27.962 "compare": false, 00:10:27.962 "compare_and_write": false, 00:10:27.962 "abort": true, 00:10:27.962 "seek_hole": false, 00:10:27.962 
"seek_data": false, 00:10:27.962 "copy": true, 00:10:27.962 "nvme_iov_md": false 00:10:27.962 }, 00:10:27.962 "memory_domains": [ 00:10:27.962 { 00:10:27.962 "dma_device_id": "system", 00:10:27.962 "dma_device_type": 1 00:10:27.962 }, 00:10:27.962 { 00:10:27.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.962 "dma_device_type": 2 00:10:27.962 } 00:10:27.962 ], 00:10:27.963 "driver_specific": {} 00:10:27.963 } 00:10:27.963 ] 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.963 BaseBdev3 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.963 12:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.963 [ 00:10:27.963 { 00:10:27.963 "name": "BaseBdev3", 00:10:27.963 "aliases": [ 00:10:27.963 "52885e0f-7f64-436e-87b2-bd7de012553d" 00:10:27.963 ], 00:10:27.963 "product_name": "Malloc disk", 00:10:27.963 "block_size": 512, 00:10:27.963 "num_blocks": 65536, 00:10:27.963 "uuid": "52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:27.963 "assigned_rate_limits": { 00:10:27.963 "rw_ios_per_sec": 0, 00:10:27.963 "rw_mbytes_per_sec": 0, 00:10:27.963 "r_mbytes_per_sec": 0, 00:10:27.963 "w_mbytes_per_sec": 0 00:10:27.963 }, 00:10:27.963 "claimed": false, 00:10:27.963 "zoned": false, 00:10:27.963 "supported_io_types": { 00:10:27.963 "read": true, 00:10:27.963 "write": true, 00:10:27.963 "unmap": true, 00:10:27.963 "flush": true, 00:10:27.963 "reset": true, 00:10:27.963 "nvme_admin": false, 00:10:27.963 "nvme_io": false, 00:10:27.963 "nvme_io_md": false, 00:10:27.963 "write_zeroes": true, 00:10:27.963 "zcopy": true, 00:10:27.963 "get_zone_info": false, 00:10:27.963 "zone_management": false, 00:10:27.963 "zone_append": false, 00:10:27.963 "compare": false, 00:10:27.963 "compare_and_write": false, 00:10:27.963 "abort": true, 00:10:27.963 "seek_hole": false, 00:10:27.963 "seek_data": false, 
00:10:27.963 "copy": true, 00:10:27.963 "nvme_iov_md": false 00:10:27.963 }, 00:10:27.963 "memory_domains": [ 00:10:27.963 { 00:10:27.963 "dma_device_id": "system", 00:10:27.963 "dma_device_type": 1 00:10:27.963 }, 00:10:27.963 { 00:10:27.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.963 "dma_device_type": 2 00:10:27.963 } 00:10:27.963 ], 00:10:27.963 "driver_specific": {} 00:10:27.963 } 00:10:27.963 ] 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.963 BaseBdev4 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.963 
12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.963 [ 00:10:27.963 { 00:10:27.963 "name": "BaseBdev4", 00:10:27.963 "aliases": [ 00:10:27.963 "6b69bb76-9157-4540-8f47-b98cd5b1be85" 00:10:27.963 ], 00:10:27.963 "product_name": "Malloc disk", 00:10:27.963 "block_size": 512, 00:10:27.963 "num_blocks": 65536, 00:10:27.963 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:27.963 "assigned_rate_limits": { 00:10:27.963 "rw_ios_per_sec": 0, 00:10:27.963 "rw_mbytes_per_sec": 0, 00:10:27.963 "r_mbytes_per_sec": 0, 00:10:27.963 "w_mbytes_per_sec": 0 00:10:27.963 }, 00:10:27.963 "claimed": false, 00:10:27.963 "zoned": false, 00:10:27.963 "supported_io_types": { 00:10:27.963 "read": true, 00:10:27.963 "write": true, 00:10:27.963 "unmap": true, 00:10:27.963 "flush": true, 00:10:27.963 "reset": true, 00:10:27.963 "nvme_admin": false, 00:10:27.963 "nvme_io": false, 00:10:27.963 "nvme_io_md": false, 00:10:27.963 "write_zeroes": true, 00:10:27.963 "zcopy": true, 00:10:27.963 "get_zone_info": false, 00:10:27.963 "zone_management": false, 00:10:27.963 "zone_append": false, 00:10:27.963 "compare": false, 00:10:27.963 "compare_and_write": false, 00:10:27.963 "abort": true, 00:10:27.963 "seek_hole": false, 00:10:27.963 "seek_data": false, 00:10:27.963 
"copy": true, 00:10:27.963 "nvme_iov_md": false 00:10:27.963 }, 00:10:27.963 "memory_domains": [ 00:10:27.963 { 00:10:27.963 "dma_device_id": "system", 00:10:27.963 "dma_device_type": 1 00:10:27.963 }, 00:10:27.963 { 00:10:27.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.963 "dma_device_type": 2 00:10:27.963 } 00:10:27.963 ], 00:10:27.963 "driver_specific": {} 00:10:27.963 } 00:10:27.963 ] 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.963 [2024-11-19 12:30:33.077924] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.963 [2024-11-19 12:30:33.078070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.963 [2024-11-19 12:30:33.078112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.963 [2024-11-19 12:30:33.080067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.963 [2024-11-19 12:30:33.080164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.963 12:30:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.963 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.964 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.964 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.964 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.964 "name": "Existed_Raid", 00:10:27.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.964 "strip_size_kb": 64, 00:10:27.964 "state": "configuring", 00:10:27.964 
"raid_level": "concat", 00:10:27.964 "superblock": false, 00:10:27.964 "num_base_bdevs": 4, 00:10:27.964 "num_base_bdevs_discovered": 3, 00:10:27.964 "num_base_bdevs_operational": 4, 00:10:27.964 "base_bdevs_list": [ 00:10:27.964 { 00:10:27.964 "name": "BaseBdev1", 00:10:27.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.964 "is_configured": false, 00:10:27.964 "data_offset": 0, 00:10:27.964 "data_size": 0 00:10:27.964 }, 00:10:27.964 { 00:10:27.964 "name": "BaseBdev2", 00:10:27.964 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:27.964 "is_configured": true, 00:10:27.964 "data_offset": 0, 00:10:27.964 "data_size": 65536 00:10:27.964 }, 00:10:27.964 { 00:10:27.964 "name": "BaseBdev3", 00:10:27.964 "uuid": "52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:27.964 "is_configured": true, 00:10:27.964 "data_offset": 0, 00:10:27.964 "data_size": 65536 00:10:27.964 }, 00:10:27.964 { 00:10:27.964 "name": "BaseBdev4", 00:10:27.964 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:27.964 "is_configured": true, 00:10:27.964 "data_offset": 0, 00:10:27.964 "data_size": 65536 00:10:27.964 } 00:10:27.964 ] 00:10:27.964 }' 00:10:27.964 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.964 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.532 [2024-11-19 12:30:33.501201] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.532 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.533 "name": "Existed_Raid", 00:10:28.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.533 "strip_size_kb": 64, 00:10:28.533 "state": "configuring", 00:10:28.533 "raid_level": "concat", 00:10:28.533 "superblock": false, 
00:10:28.533 "num_base_bdevs": 4, 00:10:28.533 "num_base_bdevs_discovered": 2, 00:10:28.533 "num_base_bdevs_operational": 4, 00:10:28.533 "base_bdevs_list": [ 00:10:28.533 { 00:10:28.533 "name": "BaseBdev1", 00:10:28.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.533 "is_configured": false, 00:10:28.533 "data_offset": 0, 00:10:28.533 "data_size": 0 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "name": null, 00:10:28.533 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:28.533 "is_configured": false, 00:10:28.533 "data_offset": 0, 00:10:28.533 "data_size": 65536 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "name": "BaseBdev3", 00:10:28.533 "uuid": "52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:28.533 "is_configured": true, 00:10:28.533 "data_offset": 0, 00:10:28.533 "data_size": 65536 00:10:28.533 }, 00:10:28.533 { 00:10:28.533 "name": "BaseBdev4", 00:10:28.533 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:28.533 "is_configured": true, 00:10:28.533 "data_offset": 0, 00:10:28.533 "data_size": 65536 00:10:28.533 } 00:10:28.533 ] 00:10:28.533 }' 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.533 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:28.792 12:30:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.792 [2024-11-19 12:30:33.979491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.792 BaseBdev1 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.792 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:28.793 12:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.793 12:30:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.793 [ 00:10:28.793 { 00:10:28.793 "name": "BaseBdev1", 00:10:28.793 "aliases": [ 00:10:28.793 "cd9fc2f2-f59e-4724-9504-53e02ba022e1" 00:10:28.793 ], 00:10:28.793 "product_name": "Malloc disk", 00:10:28.793 "block_size": 512, 00:10:28.793 "num_blocks": 65536, 00:10:28.793 "uuid": "cd9fc2f2-f59e-4724-9504-53e02ba022e1", 00:10:28.793 "assigned_rate_limits": { 00:10:28.793 "rw_ios_per_sec": 0, 00:10:28.793 "rw_mbytes_per_sec": 0, 00:10:28.793 "r_mbytes_per_sec": 0, 00:10:28.793 "w_mbytes_per_sec": 0 00:10:28.793 }, 00:10:28.793 "claimed": true, 00:10:28.793 "claim_type": "exclusive_write", 00:10:28.793 "zoned": false, 00:10:28.793 "supported_io_types": { 00:10:28.793 "read": true, 00:10:28.793 "write": true, 00:10:28.793 "unmap": true, 00:10:28.793 "flush": true, 00:10:28.793 "reset": true, 00:10:28.793 "nvme_admin": false, 00:10:28.793 "nvme_io": false, 00:10:28.793 "nvme_io_md": false, 00:10:28.793 "write_zeroes": true, 00:10:28.793 "zcopy": true, 00:10:28.793 "get_zone_info": false, 00:10:28.793 "zone_management": false, 00:10:28.793 "zone_append": false, 00:10:28.793 "compare": false, 00:10:28.793 "compare_and_write": false, 00:10:28.793 "abort": true, 00:10:28.793 "seek_hole": false, 00:10:28.793 "seek_data": false, 00:10:28.793 "copy": true, 00:10:28.793 "nvme_iov_md": false 00:10:28.793 }, 00:10:28.793 "memory_domains": [ 00:10:28.793 { 00:10:28.793 "dma_device_id": "system", 00:10:28.793 "dma_device_type": 1 00:10:28.793 }, 00:10:28.793 { 00:10:28.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.793 "dma_device_type": 2 00:10:28.793 } 00:10:28.793 ], 00:10:28.793 "driver_specific": {} 00:10:28.793 } 00:10:28.793 ] 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.793 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.052 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.052 "name": "Existed_Raid", 00:10:29.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.052 "strip_size_kb": 64, 00:10:29.052 "state": "configuring", 00:10:29.052 "raid_level": "concat", 00:10:29.052 "superblock": false, 
00:10:29.052 "num_base_bdevs": 4, 00:10:29.052 "num_base_bdevs_discovered": 3, 00:10:29.052 "num_base_bdevs_operational": 4, 00:10:29.052 "base_bdevs_list": [ 00:10:29.052 { 00:10:29.052 "name": "BaseBdev1", 00:10:29.052 "uuid": "cd9fc2f2-f59e-4724-9504-53e02ba022e1", 00:10:29.052 "is_configured": true, 00:10:29.052 "data_offset": 0, 00:10:29.052 "data_size": 65536 00:10:29.052 }, 00:10:29.052 { 00:10:29.052 "name": null, 00:10:29.052 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:29.052 "is_configured": false, 00:10:29.052 "data_offset": 0, 00:10:29.052 "data_size": 65536 00:10:29.052 }, 00:10:29.052 { 00:10:29.053 "name": "BaseBdev3", 00:10:29.053 "uuid": "52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:29.053 "is_configured": true, 00:10:29.053 "data_offset": 0, 00:10:29.053 "data_size": 65536 00:10:29.053 }, 00:10:29.053 { 00:10:29.053 "name": "BaseBdev4", 00:10:29.053 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:29.053 "is_configured": true, 00:10:29.053 "data_offset": 0, 00:10:29.053 "data_size": 65536 00:10:29.053 } 00:10:29.053 ] 00:10:29.053 }' 00:10:29.053 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.053 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.312 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.312 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.312 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.312 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:29.313 12:30:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 [2024-11-19 12:30:34.538582] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.313 12:30:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.313 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.572 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.572 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.572 "name": "Existed_Raid", 00:10:29.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.572 "strip_size_kb": 64, 00:10:29.572 "state": "configuring", 00:10:29.572 "raid_level": "concat", 00:10:29.572 "superblock": false, 00:10:29.572 "num_base_bdevs": 4, 00:10:29.572 "num_base_bdevs_discovered": 2, 00:10:29.572 "num_base_bdevs_operational": 4, 00:10:29.572 "base_bdevs_list": [ 00:10:29.572 { 00:10:29.572 "name": "BaseBdev1", 00:10:29.572 "uuid": "cd9fc2f2-f59e-4724-9504-53e02ba022e1", 00:10:29.572 "is_configured": true, 00:10:29.572 "data_offset": 0, 00:10:29.572 "data_size": 65536 00:10:29.572 }, 00:10:29.572 { 00:10:29.572 "name": null, 00:10:29.572 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:29.572 "is_configured": false, 00:10:29.572 "data_offset": 0, 00:10:29.572 "data_size": 65536 00:10:29.572 }, 00:10:29.572 { 00:10:29.572 "name": null, 00:10:29.572 "uuid": "52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:29.572 "is_configured": false, 00:10:29.572 "data_offset": 0, 00:10:29.572 "data_size": 65536 00:10:29.572 }, 00:10:29.572 { 00:10:29.572 "name": "BaseBdev4", 00:10:29.572 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:29.572 "is_configured": true, 00:10:29.572 "data_offset": 0, 00:10:29.572 "data_size": 65536 00:10:29.572 } 00:10:29.572 ] 00:10:29.572 }' 00:10:29.572 12:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.572 12:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.832 [2024-11-19 12:30:35.049773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.832 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.091 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.091 "name": "Existed_Raid", 00:10:30.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.091 "strip_size_kb": 64, 00:10:30.091 "state": "configuring", 00:10:30.091 "raid_level": "concat", 00:10:30.091 "superblock": false, 00:10:30.091 "num_base_bdevs": 4, 00:10:30.091 "num_base_bdevs_discovered": 3, 00:10:30.091 "num_base_bdevs_operational": 4, 00:10:30.091 "base_bdevs_list": [ 00:10:30.091 { 00:10:30.091 "name": "BaseBdev1", 00:10:30.091 "uuid": "cd9fc2f2-f59e-4724-9504-53e02ba022e1", 00:10:30.091 "is_configured": true, 00:10:30.091 "data_offset": 0, 00:10:30.091 "data_size": 65536 00:10:30.091 }, 00:10:30.091 { 00:10:30.091 "name": null, 00:10:30.091 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:30.091 "is_configured": false, 00:10:30.091 "data_offset": 0, 00:10:30.091 "data_size": 65536 00:10:30.091 }, 00:10:30.091 { 00:10:30.091 "name": "BaseBdev3", 00:10:30.091 "uuid": 
"52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:30.091 "is_configured": true, 00:10:30.091 "data_offset": 0, 00:10:30.091 "data_size": 65536 00:10:30.091 }, 00:10:30.091 { 00:10:30.091 "name": "BaseBdev4", 00:10:30.091 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:30.091 "is_configured": true, 00:10:30.091 "data_offset": 0, 00:10:30.091 "data_size": 65536 00:10:30.091 } 00:10:30.091 ] 00:10:30.091 }' 00:10:30.091 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.091 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.351 [2024-11-19 12:30:35.588876] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.351 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.609 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.609 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.609 "name": "Existed_Raid", 00:10:30.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.609 "strip_size_kb": 64, 00:10:30.609 "state": "configuring", 00:10:30.609 "raid_level": "concat", 00:10:30.609 "superblock": false, 00:10:30.609 "num_base_bdevs": 4, 00:10:30.609 
"num_base_bdevs_discovered": 2, 00:10:30.609 "num_base_bdevs_operational": 4, 00:10:30.609 "base_bdevs_list": [ 00:10:30.609 { 00:10:30.609 "name": null, 00:10:30.609 "uuid": "cd9fc2f2-f59e-4724-9504-53e02ba022e1", 00:10:30.609 "is_configured": false, 00:10:30.609 "data_offset": 0, 00:10:30.609 "data_size": 65536 00:10:30.609 }, 00:10:30.609 { 00:10:30.609 "name": null, 00:10:30.609 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:30.609 "is_configured": false, 00:10:30.609 "data_offset": 0, 00:10:30.609 "data_size": 65536 00:10:30.609 }, 00:10:30.609 { 00:10:30.609 "name": "BaseBdev3", 00:10:30.609 "uuid": "52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:30.609 "is_configured": true, 00:10:30.609 "data_offset": 0, 00:10:30.609 "data_size": 65536 00:10:30.609 }, 00:10:30.609 { 00:10:30.609 "name": "BaseBdev4", 00:10:30.609 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:30.609 "is_configured": true, 00:10:30.609 "data_offset": 0, 00:10:30.609 "data_size": 65536 00:10:30.609 } 00:10:30.609 ] 00:10:30.609 }' 00:10:30.609 12:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.609 12:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.868 [2024-11-19 12:30:36.082655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.868 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.126 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.126 "name": "Existed_Raid", 00:10:31.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.126 "strip_size_kb": 64, 00:10:31.126 "state": "configuring", 00:10:31.126 "raid_level": "concat", 00:10:31.126 "superblock": false, 00:10:31.126 "num_base_bdevs": 4, 00:10:31.126 "num_base_bdevs_discovered": 3, 00:10:31.126 "num_base_bdevs_operational": 4, 00:10:31.126 "base_bdevs_list": [ 00:10:31.126 { 00:10:31.126 "name": null, 00:10:31.126 "uuid": "cd9fc2f2-f59e-4724-9504-53e02ba022e1", 00:10:31.126 "is_configured": false, 00:10:31.126 "data_offset": 0, 00:10:31.126 "data_size": 65536 00:10:31.126 }, 00:10:31.126 { 00:10:31.126 "name": "BaseBdev2", 00:10:31.126 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:31.126 "is_configured": true, 00:10:31.126 "data_offset": 0, 00:10:31.126 "data_size": 65536 00:10:31.126 }, 00:10:31.126 { 00:10:31.126 "name": "BaseBdev3", 00:10:31.126 "uuid": "52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:31.126 "is_configured": true, 00:10:31.126 "data_offset": 0, 00:10:31.126 "data_size": 65536 00:10:31.126 }, 00:10:31.126 { 00:10:31.126 "name": "BaseBdev4", 00:10:31.126 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:31.126 "is_configured": true, 00:10:31.126 "data_offset": 0, 00:10:31.126 "data_size": 65536 00:10:31.126 } 00:10:31.126 ] 00:10:31.126 }' 00:10:31.126 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.126 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cd9fc2f2-f59e-4724-9504-53e02ba022e1 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.385 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.385 [2024-11-19 12:30:36.616832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:31.385 [2024-11-19 12:30:36.616967] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:31.386 [2024-11-19 12:30:36.616991] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:31.386 [2024-11-19 12:30:36.617257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:31.386 [2024-11-19 12:30:36.617411] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:31.386 [2024-11-19 12:30:36.617453] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:31.386 [2024-11-19 12:30:36.617645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.386 NewBaseBdev 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.386 12:30:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.644 [ 00:10:31.644 { 00:10:31.644 "name": "NewBaseBdev", 00:10:31.644 "aliases": [ 00:10:31.644 "cd9fc2f2-f59e-4724-9504-53e02ba022e1" 00:10:31.644 ], 00:10:31.644 "product_name": "Malloc disk", 00:10:31.644 "block_size": 512, 00:10:31.644 "num_blocks": 65536, 00:10:31.644 "uuid": "cd9fc2f2-f59e-4724-9504-53e02ba022e1", 00:10:31.644 "assigned_rate_limits": { 00:10:31.644 "rw_ios_per_sec": 0, 00:10:31.644 "rw_mbytes_per_sec": 0, 00:10:31.644 "r_mbytes_per_sec": 0, 00:10:31.644 "w_mbytes_per_sec": 0 00:10:31.644 }, 00:10:31.644 "claimed": true, 00:10:31.644 "claim_type": "exclusive_write", 00:10:31.644 "zoned": false, 00:10:31.644 "supported_io_types": { 00:10:31.644 "read": true, 00:10:31.644 "write": true, 00:10:31.644 "unmap": true, 00:10:31.644 "flush": true, 00:10:31.645 "reset": true, 00:10:31.645 "nvme_admin": false, 00:10:31.645 "nvme_io": false, 00:10:31.645 "nvme_io_md": false, 00:10:31.645 "write_zeroes": true, 00:10:31.645 "zcopy": true, 00:10:31.645 "get_zone_info": false, 00:10:31.645 "zone_management": false, 00:10:31.645 "zone_append": false, 00:10:31.645 "compare": false, 00:10:31.645 "compare_and_write": false, 00:10:31.645 "abort": true, 00:10:31.645 "seek_hole": false, 00:10:31.645 "seek_data": false, 00:10:31.645 "copy": true, 00:10:31.645 "nvme_iov_md": false 00:10:31.645 }, 00:10:31.645 "memory_domains": [ 00:10:31.645 { 00:10:31.645 "dma_device_id": "system", 00:10:31.645 "dma_device_type": 1 00:10:31.645 }, 00:10:31.645 { 00:10:31.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.645 "dma_device_type": 2 00:10:31.645 } 00:10:31.645 ], 00:10:31.645 "driver_specific": {} 00:10:31.645 } 00:10:31.645 ] 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.645 "name": "Existed_Raid", 00:10:31.645 "uuid": "f0127b0b-7072-416f-962e-926de074a9a1", 00:10:31.645 "strip_size_kb": 64, 00:10:31.645 "state": "online", 00:10:31.645 "raid_level": "concat", 00:10:31.645 "superblock": false, 00:10:31.645 
"num_base_bdevs": 4, 00:10:31.645 "num_base_bdevs_discovered": 4, 00:10:31.645 "num_base_bdevs_operational": 4, 00:10:31.645 "base_bdevs_list": [ 00:10:31.645 { 00:10:31.645 "name": "NewBaseBdev", 00:10:31.645 "uuid": "cd9fc2f2-f59e-4724-9504-53e02ba022e1", 00:10:31.645 "is_configured": true, 00:10:31.645 "data_offset": 0, 00:10:31.645 "data_size": 65536 00:10:31.645 }, 00:10:31.645 { 00:10:31.645 "name": "BaseBdev2", 00:10:31.645 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:31.645 "is_configured": true, 00:10:31.645 "data_offset": 0, 00:10:31.645 "data_size": 65536 00:10:31.645 }, 00:10:31.645 { 00:10:31.645 "name": "BaseBdev3", 00:10:31.645 "uuid": "52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:31.645 "is_configured": true, 00:10:31.645 "data_offset": 0, 00:10:31.645 "data_size": 65536 00:10:31.645 }, 00:10:31.645 { 00:10:31.645 "name": "BaseBdev4", 00:10:31.645 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:31.645 "is_configured": true, 00:10:31.645 "data_offset": 0, 00:10:31.645 "data_size": 65536 00:10:31.645 } 00:10:31.645 ] 00:10:31.645 }' 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.645 12:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.903 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.904 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:31.904 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:31.904 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:31.904 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:31.904 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:31.904 12:30:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:31.904 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.904 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.904 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:31.904 [2024-11-19 12:30:37.124333] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.904 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.162 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.162 "name": "Existed_Raid", 00:10:32.162 "aliases": [ 00:10:32.162 "f0127b0b-7072-416f-962e-926de074a9a1" 00:10:32.162 ], 00:10:32.162 "product_name": "Raid Volume", 00:10:32.162 "block_size": 512, 00:10:32.162 "num_blocks": 262144, 00:10:32.162 "uuid": "f0127b0b-7072-416f-962e-926de074a9a1", 00:10:32.162 "assigned_rate_limits": { 00:10:32.162 "rw_ios_per_sec": 0, 00:10:32.162 "rw_mbytes_per_sec": 0, 00:10:32.162 "r_mbytes_per_sec": 0, 00:10:32.162 "w_mbytes_per_sec": 0 00:10:32.162 }, 00:10:32.162 "claimed": false, 00:10:32.162 "zoned": false, 00:10:32.162 "supported_io_types": { 00:10:32.162 "read": true, 00:10:32.162 "write": true, 00:10:32.162 "unmap": true, 00:10:32.162 "flush": true, 00:10:32.162 "reset": true, 00:10:32.162 "nvme_admin": false, 00:10:32.162 "nvme_io": false, 00:10:32.162 "nvme_io_md": false, 00:10:32.162 "write_zeroes": true, 00:10:32.162 "zcopy": false, 00:10:32.162 "get_zone_info": false, 00:10:32.162 "zone_management": false, 00:10:32.162 "zone_append": false, 00:10:32.162 "compare": false, 00:10:32.162 "compare_and_write": false, 00:10:32.162 "abort": false, 00:10:32.162 "seek_hole": false, 00:10:32.162 "seek_data": false, 00:10:32.162 "copy": false, 00:10:32.162 "nvme_iov_md": false 00:10:32.163 }, 
00:10:32.163 "memory_domains": [ 00:10:32.163 { 00:10:32.163 "dma_device_id": "system", 00:10:32.163 "dma_device_type": 1 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.163 "dma_device_type": 2 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "dma_device_id": "system", 00:10:32.163 "dma_device_type": 1 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.163 "dma_device_type": 2 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "dma_device_id": "system", 00:10:32.163 "dma_device_type": 1 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.163 "dma_device_type": 2 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "dma_device_id": "system", 00:10:32.163 "dma_device_type": 1 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.163 "dma_device_type": 2 00:10:32.163 } 00:10:32.163 ], 00:10:32.163 "driver_specific": { 00:10:32.163 "raid": { 00:10:32.163 "uuid": "f0127b0b-7072-416f-962e-926de074a9a1", 00:10:32.163 "strip_size_kb": 64, 00:10:32.163 "state": "online", 00:10:32.163 "raid_level": "concat", 00:10:32.163 "superblock": false, 00:10:32.163 "num_base_bdevs": 4, 00:10:32.163 "num_base_bdevs_discovered": 4, 00:10:32.163 "num_base_bdevs_operational": 4, 00:10:32.163 "base_bdevs_list": [ 00:10:32.163 { 00:10:32.163 "name": "NewBaseBdev", 00:10:32.163 "uuid": "cd9fc2f2-f59e-4724-9504-53e02ba022e1", 00:10:32.163 "is_configured": true, 00:10:32.163 "data_offset": 0, 00:10:32.163 "data_size": 65536 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "name": "BaseBdev2", 00:10:32.163 "uuid": "5d0975d3-4b83-40cd-bfab-df4aa0957893", 00:10:32.163 "is_configured": true, 00:10:32.163 "data_offset": 0, 00:10:32.163 "data_size": 65536 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "name": "BaseBdev3", 00:10:32.163 "uuid": "52885e0f-7f64-436e-87b2-bd7de012553d", 00:10:32.163 "is_configured": true, 00:10:32.163 "data_offset": 0, 
00:10:32.163 "data_size": 65536 00:10:32.163 }, 00:10:32.163 { 00:10:32.163 "name": "BaseBdev4", 00:10:32.163 "uuid": "6b69bb76-9157-4540-8f47-b98cd5b1be85", 00:10:32.163 "is_configured": true, 00:10:32.163 "data_offset": 0, 00:10:32.163 "data_size": 65536 00:10:32.163 } 00:10:32.163 ] 00:10:32.163 } 00:10:32.163 } 00:10:32.163 }' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:32.163 BaseBdev2 00:10:32.163 BaseBdev3 00:10:32.163 BaseBdev4' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.163 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.423 [2024-11-19 12:30:37.431466] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.423 [2024-11-19 12:30:37.431498] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.423 [2024-11-19 12:30:37.431570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.423 [2024-11-19 12:30:37.431640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.423 [2024-11-19 12:30:37.431650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82351 00:10:32.423 12:30:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82351 ']' 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82351 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82351 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82351' 00:10:32.423 killing process with pid 82351 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82351 00:10:32.423 [2024-11-19 12:30:37.484734] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.423 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82351 00:10:32.423 [2024-11-19 12:30:37.526205] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:32.682 00:10:32.682 real 0m9.782s 00:10:32.682 user 0m16.547s 00:10:32.682 sys 0m2.163s 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.682 ************************************ 00:10:32.682 END TEST raid_state_function_test 00:10:32.682 ************************************ 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.682 12:30:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:32.682 12:30:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:32.682 12:30:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.682 12:30:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.682 ************************************ 00:10:32.682 START TEST raid_state_function_test_sb 00:10:32.682 ************************************ 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83006 00:10:32.682 12:30:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:32.682 Process raid pid: 83006 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83006' 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83006 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83006 ']' 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.682 12:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.943 [2024-11-19 12:30:37.952692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:32.943 [2024-11-19 12:30:37.952851] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.943 [2024-11-19 12:30:38.118880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.943 [2024-11-19 12:30:38.167857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.202 [2024-11-19 12:30:38.210297] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.202 [2024-11-19 12:30:38.210341] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.769 12:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.769 12:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:33.769 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.769 12:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.769 12:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.769 [2024-11-19 12:30:38.792165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.769 [2024-11-19 12:30:38.792224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.769 [2024-11-19 12:30:38.792243] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.770 [2024-11-19 12:30:38.792254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.770 [2024-11-19 12:30:38.792260] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:33.770 [2024-11-19 12:30:38.792271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.770 [2024-11-19 12:30:38.792277] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.770 [2024-11-19 12:30:38.792287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.770 
12:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.770 "name": "Existed_Raid", 00:10:33.770 "uuid": "606a76b7-181d-479c-a085-8368114e05aa", 00:10:33.770 "strip_size_kb": 64, 00:10:33.770 "state": "configuring", 00:10:33.770 "raid_level": "concat", 00:10:33.770 "superblock": true, 00:10:33.770 "num_base_bdevs": 4, 00:10:33.770 "num_base_bdevs_discovered": 0, 00:10:33.770 "num_base_bdevs_operational": 4, 00:10:33.770 "base_bdevs_list": [ 00:10:33.770 { 00:10:33.770 "name": "BaseBdev1", 00:10:33.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.770 "is_configured": false, 00:10:33.770 "data_offset": 0, 00:10:33.770 "data_size": 0 00:10:33.770 }, 00:10:33.770 { 00:10:33.770 "name": "BaseBdev2", 00:10:33.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.770 "is_configured": false, 00:10:33.770 "data_offset": 0, 00:10:33.770 "data_size": 0 00:10:33.770 }, 00:10:33.770 { 00:10:33.770 "name": "BaseBdev3", 00:10:33.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.770 "is_configured": false, 00:10:33.770 "data_offset": 0, 00:10:33.770 "data_size": 0 00:10:33.770 }, 00:10:33.770 { 00:10:33.770 "name": "BaseBdev4", 00:10:33.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.770 "is_configured": false, 00:10:33.770 "data_offset": 0, 00:10:33.770 "data_size": 0 00:10:33.770 } 00:10:33.770 ] 00:10:33.770 }' 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.770 12:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.029 12:30:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.029 [2024-11-19 12:30:39.227335] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.029 [2024-11-19 12:30:39.227479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.029 [2024-11-19 12:30:39.239357] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.029 [2024-11-19 12:30:39.239453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.029 [2024-11-19 12:30:39.239480] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.029 [2024-11-19 12:30:39.239503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.029 [2024-11-19 12:30:39.239521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.029 [2024-11-19 12:30:39.239541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.029 [2024-11-19 12:30:39.239559] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:34.029 [2024-11-19 12:30:39.239579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.029 [2024-11-19 12:30:39.260315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.029 BaseBdev1 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.029 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.289 [ 00:10:34.289 { 00:10:34.289 "name": "BaseBdev1", 00:10:34.289 "aliases": [ 00:10:34.289 "0404fb21-2d84-4d50-a21d-d41ebb2a3817" 00:10:34.289 ], 00:10:34.289 "product_name": "Malloc disk", 00:10:34.289 "block_size": 512, 00:10:34.289 "num_blocks": 65536, 00:10:34.289 "uuid": "0404fb21-2d84-4d50-a21d-d41ebb2a3817", 00:10:34.289 "assigned_rate_limits": { 00:10:34.289 "rw_ios_per_sec": 0, 00:10:34.289 "rw_mbytes_per_sec": 0, 00:10:34.289 "r_mbytes_per_sec": 0, 00:10:34.289 "w_mbytes_per_sec": 0 00:10:34.289 }, 00:10:34.289 "claimed": true, 00:10:34.289 "claim_type": "exclusive_write", 00:10:34.289 "zoned": false, 00:10:34.289 "supported_io_types": { 00:10:34.289 "read": true, 00:10:34.289 "write": true, 00:10:34.289 "unmap": true, 00:10:34.289 "flush": true, 00:10:34.289 "reset": true, 00:10:34.289 "nvme_admin": false, 00:10:34.289 "nvme_io": false, 00:10:34.289 "nvme_io_md": false, 00:10:34.289 "write_zeroes": true, 00:10:34.289 "zcopy": true, 00:10:34.289 "get_zone_info": false, 00:10:34.289 "zone_management": false, 00:10:34.289 "zone_append": false, 00:10:34.289 "compare": false, 00:10:34.289 "compare_and_write": false, 00:10:34.289 "abort": true, 00:10:34.289 "seek_hole": false, 00:10:34.289 "seek_data": false, 00:10:34.289 "copy": true, 00:10:34.289 "nvme_iov_md": false 00:10:34.289 }, 00:10:34.289 "memory_domains": [ 00:10:34.289 { 00:10:34.289 "dma_device_id": "system", 00:10:34.289 "dma_device_type": 1 00:10:34.289 }, 00:10:34.289 { 00:10:34.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.289 "dma_device_type": 2 00:10:34.289 } 
00:10:34.289 ], 00:10:34.289 "driver_specific": {} 00:10:34.289 } 00:10:34.289 ] 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.289 12:30:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.289 "name": "Existed_Raid", 00:10:34.289 "uuid": "199b62b4-8b19-4c14-a596-92efaf23e8ca", 00:10:34.289 "strip_size_kb": 64, 00:10:34.289 "state": "configuring", 00:10:34.289 "raid_level": "concat", 00:10:34.289 "superblock": true, 00:10:34.289 "num_base_bdevs": 4, 00:10:34.289 "num_base_bdevs_discovered": 1, 00:10:34.289 "num_base_bdevs_operational": 4, 00:10:34.289 "base_bdevs_list": [ 00:10:34.289 { 00:10:34.289 "name": "BaseBdev1", 00:10:34.289 "uuid": "0404fb21-2d84-4d50-a21d-d41ebb2a3817", 00:10:34.289 "is_configured": true, 00:10:34.289 "data_offset": 2048, 00:10:34.289 "data_size": 63488 00:10:34.289 }, 00:10:34.289 { 00:10:34.289 "name": "BaseBdev2", 00:10:34.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.289 "is_configured": false, 00:10:34.289 "data_offset": 0, 00:10:34.289 "data_size": 0 00:10:34.289 }, 00:10:34.289 { 00:10:34.289 "name": "BaseBdev3", 00:10:34.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.289 "is_configured": false, 00:10:34.289 "data_offset": 0, 00:10:34.289 "data_size": 0 00:10:34.289 }, 00:10:34.289 { 00:10:34.289 "name": "BaseBdev4", 00:10:34.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.289 "is_configured": false, 00:10:34.289 "data_offset": 0, 00:10:34.289 "data_size": 0 00:10:34.289 } 00:10:34.289 ] 00:10:34.289 }' 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.289 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.548 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.548 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.548 12:30:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.548 [2024-11-19 12:30:39.695611] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.548 [2024-11-19 12:30:39.695729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:34.548 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.548 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.548 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.548 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.548 [2024-11-19 12:30:39.707632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.548 [2024-11-19 12:30:39.709538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.548 [2024-11-19 12:30:39.709610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.548 [2024-11-19 12:30:39.709636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.548 [2024-11-19 12:30:39.709656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.548 [2024-11-19 12:30:39.709673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.548 [2024-11-19 12:30:39.709691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.548 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:34.549 "name": "Existed_Raid", 00:10:34.549 "uuid": "60775da0-bdfc-485e-a165-a3173843ffa1", 00:10:34.549 "strip_size_kb": 64, 00:10:34.549 "state": "configuring", 00:10:34.549 "raid_level": "concat", 00:10:34.549 "superblock": true, 00:10:34.549 "num_base_bdevs": 4, 00:10:34.549 "num_base_bdevs_discovered": 1, 00:10:34.549 "num_base_bdevs_operational": 4, 00:10:34.549 "base_bdevs_list": [ 00:10:34.549 { 00:10:34.549 "name": "BaseBdev1", 00:10:34.549 "uuid": "0404fb21-2d84-4d50-a21d-d41ebb2a3817", 00:10:34.549 "is_configured": true, 00:10:34.549 "data_offset": 2048, 00:10:34.549 "data_size": 63488 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "name": "BaseBdev2", 00:10:34.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.549 "is_configured": false, 00:10:34.549 "data_offset": 0, 00:10:34.549 "data_size": 0 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "name": "BaseBdev3", 00:10:34.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.549 "is_configured": false, 00:10:34.549 "data_offset": 0, 00:10:34.549 "data_size": 0 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "name": "BaseBdev4", 00:10:34.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.549 "is_configured": false, 00:10:34.549 "data_offset": 0, 00:10:34.549 "data_size": 0 00:10:34.549 } 00:10:34.549 ] 00:10:34.549 }' 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.549 12:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.118 [2024-11-19 12:30:40.134074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:35.118 BaseBdev2 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.118 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.118 [ 00:10:35.118 { 00:10:35.118 "name": "BaseBdev2", 00:10:35.118 "aliases": [ 00:10:35.118 "2dd4e1e3-796d-4179-885c-ef53c86dd32c" 00:10:35.118 ], 00:10:35.118 "product_name": "Malloc disk", 00:10:35.118 "block_size": 512, 00:10:35.118 "num_blocks": 65536, 00:10:35.118 "uuid": "2dd4e1e3-796d-4179-885c-ef53c86dd32c", 
00:10:35.118 "assigned_rate_limits": { 00:10:35.118 "rw_ios_per_sec": 0, 00:10:35.118 "rw_mbytes_per_sec": 0, 00:10:35.118 "r_mbytes_per_sec": 0, 00:10:35.119 "w_mbytes_per_sec": 0 00:10:35.119 }, 00:10:35.119 "claimed": true, 00:10:35.119 "claim_type": "exclusive_write", 00:10:35.119 "zoned": false, 00:10:35.119 "supported_io_types": { 00:10:35.119 "read": true, 00:10:35.119 "write": true, 00:10:35.119 "unmap": true, 00:10:35.119 "flush": true, 00:10:35.119 "reset": true, 00:10:35.119 "nvme_admin": false, 00:10:35.119 "nvme_io": false, 00:10:35.119 "nvme_io_md": false, 00:10:35.119 "write_zeroes": true, 00:10:35.119 "zcopy": true, 00:10:35.119 "get_zone_info": false, 00:10:35.119 "zone_management": false, 00:10:35.119 "zone_append": false, 00:10:35.119 "compare": false, 00:10:35.119 "compare_and_write": false, 00:10:35.119 "abort": true, 00:10:35.119 "seek_hole": false, 00:10:35.119 "seek_data": false, 00:10:35.119 "copy": true, 00:10:35.119 "nvme_iov_md": false 00:10:35.119 }, 00:10:35.119 "memory_domains": [ 00:10:35.119 { 00:10:35.119 "dma_device_id": "system", 00:10:35.119 "dma_device_type": 1 00:10:35.119 }, 00:10:35.119 { 00:10:35.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.119 "dma_device_type": 2 00:10:35.119 } 00:10:35.119 ], 00:10:35.119 "driver_specific": {} 00:10:35.119 } 00:10:35.119 ] 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.119 "name": "Existed_Raid", 00:10:35.119 "uuid": "60775da0-bdfc-485e-a165-a3173843ffa1", 00:10:35.119 "strip_size_kb": 64, 00:10:35.119 "state": "configuring", 00:10:35.119 "raid_level": "concat", 00:10:35.119 "superblock": true, 00:10:35.119 "num_base_bdevs": 4, 00:10:35.119 "num_base_bdevs_discovered": 2, 00:10:35.119 
"num_base_bdevs_operational": 4, 00:10:35.119 "base_bdevs_list": [ 00:10:35.119 { 00:10:35.119 "name": "BaseBdev1", 00:10:35.119 "uuid": "0404fb21-2d84-4d50-a21d-d41ebb2a3817", 00:10:35.119 "is_configured": true, 00:10:35.119 "data_offset": 2048, 00:10:35.119 "data_size": 63488 00:10:35.119 }, 00:10:35.119 { 00:10:35.119 "name": "BaseBdev2", 00:10:35.119 "uuid": "2dd4e1e3-796d-4179-885c-ef53c86dd32c", 00:10:35.119 "is_configured": true, 00:10:35.119 "data_offset": 2048, 00:10:35.119 "data_size": 63488 00:10:35.119 }, 00:10:35.119 { 00:10:35.119 "name": "BaseBdev3", 00:10:35.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.119 "is_configured": false, 00:10:35.119 "data_offset": 0, 00:10:35.119 "data_size": 0 00:10:35.119 }, 00:10:35.119 { 00:10:35.119 "name": "BaseBdev4", 00:10:35.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.119 "is_configured": false, 00:10:35.119 "data_offset": 0, 00:10:35.119 "data_size": 0 00:10:35.119 } 00:10:35.119 ] 00:10:35.119 }' 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.119 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.379 [2024-11-19 12:30:40.620460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.379 BaseBdev3 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.379 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.639 [ 00:10:35.639 { 00:10:35.639 "name": "BaseBdev3", 00:10:35.639 "aliases": [ 00:10:35.639 "2451cf36-0746-4ad9-9ccc-45aeff40915e" 00:10:35.639 ], 00:10:35.639 "product_name": "Malloc disk", 00:10:35.639 "block_size": 512, 00:10:35.639 "num_blocks": 65536, 00:10:35.639 "uuid": "2451cf36-0746-4ad9-9ccc-45aeff40915e", 00:10:35.639 "assigned_rate_limits": { 00:10:35.639 "rw_ios_per_sec": 0, 00:10:35.639 "rw_mbytes_per_sec": 0, 00:10:35.639 "r_mbytes_per_sec": 0, 00:10:35.639 "w_mbytes_per_sec": 0 00:10:35.639 }, 00:10:35.639 "claimed": true, 00:10:35.639 "claim_type": "exclusive_write", 00:10:35.639 "zoned": false, 00:10:35.639 "supported_io_types": { 
00:10:35.639 "read": true, 00:10:35.639 "write": true, 00:10:35.639 "unmap": true, 00:10:35.639 "flush": true, 00:10:35.639 "reset": true, 00:10:35.639 "nvme_admin": false, 00:10:35.639 "nvme_io": false, 00:10:35.639 "nvme_io_md": false, 00:10:35.639 "write_zeroes": true, 00:10:35.639 "zcopy": true, 00:10:35.639 "get_zone_info": false, 00:10:35.639 "zone_management": false, 00:10:35.639 "zone_append": false, 00:10:35.639 "compare": false, 00:10:35.639 "compare_and_write": false, 00:10:35.639 "abort": true, 00:10:35.639 "seek_hole": false, 00:10:35.639 "seek_data": false, 00:10:35.639 "copy": true, 00:10:35.639 "nvme_iov_md": false 00:10:35.639 }, 00:10:35.639 "memory_domains": [ 00:10:35.639 { 00:10:35.639 "dma_device_id": "system", 00:10:35.639 "dma_device_type": 1 00:10:35.639 }, 00:10:35.639 { 00:10:35.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.639 "dma_device_type": 2 00:10:35.639 } 00:10:35.639 ], 00:10:35.639 "driver_specific": {} 00:10:35.639 } 00:10:35.639 ] 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.639 "name": "Existed_Raid", 00:10:35.639 "uuid": "60775da0-bdfc-485e-a165-a3173843ffa1", 00:10:35.639 "strip_size_kb": 64, 00:10:35.639 "state": "configuring", 00:10:35.639 "raid_level": "concat", 00:10:35.639 "superblock": true, 00:10:35.639 "num_base_bdevs": 4, 00:10:35.639 "num_base_bdevs_discovered": 3, 00:10:35.639 "num_base_bdevs_operational": 4, 00:10:35.639 "base_bdevs_list": [ 00:10:35.639 { 00:10:35.639 "name": "BaseBdev1", 00:10:35.639 "uuid": "0404fb21-2d84-4d50-a21d-d41ebb2a3817", 00:10:35.639 "is_configured": true, 00:10:35.639 "data_offset": 2048, 00:10:35.639 "data_size": 63488 00:10:35.639 }, 00:10:35.639 { 00:10:35.639 "name": "BaseBdev2", 00:10:35.639 
"uuid": "2dd4e1e3-796d-4179-885c-ef53c86dd32c", 00:10:35.639 "is_configured": true, 00:10:35.639 "data_offset": 2048, 00:10:35.639 "data_size": 63488 00:10:35.639 }, 00:10:35.639 { 00:10:35.639 "name": "BaseBdev3", 00:10:35.639 "uuid": "2451cf36-0746-4ad9-9ccc-45aeff40915e", 00:10:35.639 "is_configured": true, 00:10:35.639 "data_offset": 2048, 00:10:35.639 "data_size": 63488 00:10:35.639 }, 00:10:35.639 { 00:10:35.639 "name": "BaseBdev4", 00:10:35.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.639 "is_configured": false, 00:10:35.639 "data_offset": 0, 00:10:35.639 "data_size": 0 00:10:35.639 } 00:10:35.639 ] 00:10:35.639 }' 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.639 12:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.899 [2024-11-19 12:30:41.118791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.899 [2024-11-19 12:30:41.119122] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:35.899 [2024-11-19 12:30:41.119175] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:35.899 BaseBdev4 00:10:35.899 [2024-11-19 12:30:41.119478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:35.899 [2024-11-19 12:30:41.119624] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:35.899 [2024-11-19 12:30:41.119637] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:35.899 [2024-11-19 12:30:41.119765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.899 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.899 [ 00:10:35.899 { 00:10:35.899 "name": "BaseBdev4", 00:10:35.899 "aliases": [ 00:10:35.899 "ad360f41-b2d7-4717-89fc-f126a5506b59" 00:10:35.899 ], 00:10:35.899 "product_name": "Malloc disk", 00:10:35.899 "block_size": 512, 00:10:35.899 
"num_blocks": 65536, 00:10:35.899 "uuid": "ad360f41-b2d7-4717-89fc-f126a5506b59", 00:10:35.899 "assigned_rate_limits": { 00:10:35.899 "rw_ios_per_sec": 0, 00:10:35.899 "rw_mbytes_per_sec": 0, 00:10:35.899 "r_mbytes_per_sec": 0, 00:10:35.899 "w_mbytes_per_sec": 0 00:10:35.899 }, 00:10:35.899 "claimed": true, 00:10:35.899 "claim_type": "exclusive_write", 00:10:35.899 "zoned": false, 00:10:35.899 "supported_io_types": { 00:10:35.899 "read": true, 00:10:35.899 "write": true, 00:10:35.899 "unmap": true, 00:10:35.899 "flush": true, 00:10:35.899 "reset": true, 00:10:35.899 "nvme_admin": false, 00:10:35.899 "nvme_io": false, 00:10:35.899 "nvme_io_md": false, 00:10:35.899 "write_zeroes": true, 00:10:35.899 "zcopy": true, 00:10:35.899 "get_zone_info": false, 00:10:35.899 "zone_management": false, 00:10:35.899 "zone_append": false, 00:10:35.899 "compare": false, 00:10:35.899 "compare_and_write": false, 00:10:35.899 "abort": true, 00:10:35.899 "seek_hole": false, 00:10:35.899 "seek_data": false, 00:10:35.899 "copy": true, 00:10:35.899 "nvme_iov_md": false 00:10:35.899 }, 00:10:35.899 "memory_domains": [ 00:10:35.899 { 00:10:35.899 "dma_device_id": "system", 00:10:35.899 "dma_device_type": 1 00:10:35.899 }, 00:10:35.899 { 00:10:35.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.899 "dma_device_type": 2 00:10:35.899 } 00:10:35.899 ], 00:10:35.899 "driver_specific": {} 00:10:35.899 } 00:10:35.899 ] 00:10:35.900 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.900 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.900 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.900 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.900 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:35.900 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.900 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.900 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.159 "name": "Existed_Raid", 00:10:36.159 "uuid": "60775da0-bdfc-485e-a165-a3173843ffa1", 00:10:36.159 "strip_size_kb": 64, 00:10:36.159 "state": "online", 00:10:36.159 "raid_level": "concat", 00:10:36.159 "superblock": true, 00:10:36.159 "num_base_bdevs": 4, 
00:10:36.159 "num_base_bdevs_discovered": 4, 00:10:36.159 "num_base_bdevs_operational": 4, 00:10:36.159 "base_bdevs_list": [ 00:10:36.159 { 00:10:36.159 "name": "BaseBdev1", 00:10:36.159 "uuid": "0404fb21-2d84-4d50-a21d-d41ebb2a3817", 00:10:36.159 "is_configured": true, 00:10:36.159 "data_offset": 2048, 00:10:36.159 "data_size": 63488 00:10:36.159 }, 00:10:36.159 { 00:10:36.159 "name": "BaseBdev2", 00:10:36.159 "uuid": "2dd4e1e3-796d-4179-885c-ef53c86dd32c", 00:10:36.159 "is_configured": true, 00:10:36.159 "data_offset": 2048, 00:10:36.159 "data_size": 63488 00:10:36.159 }, 00:10:36.159 { 00:10:36.159 "name": "BaseBdev3", 00:10:36.159 "uuid": "2451cf36-0746-4ad9-9ccc-45aeff40915e", 00:10:36.159 "is_configured": true, 00:10:36.159 "data_offset": 2048, 00:10:36.159 "data_size": 63488 00:10:36.159 }, 00:10:36.159 { 00:10:36.159 "name": "BaseBdev4", 00:10:36.159 "uuid": "ad360f41-b2d7-4717-89fc-f126a5506b59", 00:10:36.159 "is_configured": true, 00:10:36.159 "data_offset": 2048, 00:10:36.159 "data_size": 63488 00:10:36.159 } 00:10:36.159 ] 00:10:36.159 }' 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.159 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.418 
12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.418 [2024-11-19 12:30:41.646287] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.418 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.677 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.677 "name": "Existed_Raid", 00:10:36.677 "aliases": [ 00:10:36.677 "60775da0-bdfc-485e-a165-a3173843ffa1" 00:10:36.677 ], 00:10:36.677 "product_name": "Raid Volume", 00:10:36.677 "block_size": 512, 00:10:36.677 "num_blocks": 253952, 00:10:36.677 "uuid": "60775da0-bdfc-485e-a165-a3173843ffa1", 00:10:36.677 "assigned_rate_limits": { 00:10:36.677 "rw_ios_per_sec": 0, 00:10:36.677 "rw_mbytes_per_sec": 0, 00:10:36.677 "r_mbytes_per_sec": 0, 00:10:36.677 "w_mbytes_per_sec": 0 00:10:36.677 }, 00:10:36.677 "claimed": false, 00:10:36.677 "zoned": false, 00:10:36.677 "supported_io_types": { 00:10:36.677 "read": true, 00:10:36.677 "write": true, 00:10:36.677 "unmap": true, 00:10:36.677 "flush": true, 00:10:36.677 "reset": true, 00:10:36.677 "nvme_admin": false, 00:10:36.677 "nvme_io": false, 00:10:36.677 "nvme_io_md": false, 00:10:36.677 "write_zeroes": true, 00:10:36.677 "zcopy": false, 00:10:36.677 "get_zone_info": false, 00:10:36.677 "zone_management": false, 00:10:36.677 "zone_append": false, 00:10:36.677 "compare": false, 00:10:36.677 "compare_and_write": false, 00:10:36.677 "abort": false, 00:10:36.677 "seek_hole": false, 00:10:36.677 "seek_data": false, 00:10:36.677 "copy": false, 00:10:36.677 
"nvme_iov_md": false 00:10:36.677 }, 00:10:36.677 "memory_domains": [ 00:10:36.677 { 00:10:36.677 "dma_device_id": "system", 00:10:36.677 "dma_device_type": 1 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.677 "dma_device_type": 2 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "dma_device_id": "system", 00:10:36.677 "dma_device_type": 1 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.677 "dma_device_type": 2 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "dma_device_id": "system", 00:10:36.677 "dma_device_type": 1 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.677 "dma_device_type": 2 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "dma_device_id": "system", 00:10:36.677 "dma_device_type": 1 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.677 "dma_device_type": 2 00:10:36.677 } 00:10:36.677 ], 00:10:36.677 "driver_specific": { 00:10:36.677 "raid": { 00:10:36.677 "uuid": "60775da0-bdfc-485e-a165-a3173843ffa1", 00:10:36.677 "strip_size_kb": 64, 00:10:36.677 "state": "online", 00:10:36.677 "raid_level": "concat", 00:10:36.677 "superblock": true, 00:10:36.677 "num_base_bdevs": 4, 00:10:36.677 "num_base_bdevs_discovered": 4, 00:10:36.677 "num_base_bdevs_operational": 4, 00:10:36.677 "base_bdevs_list": [ 00:10:36.677 { 00:10:36.677 "name": "BaseBdev1", 00:10:36.677 "uuid": "0404fb21-2d84-4d50-a21d-d41ebb2a3817", 00:10:36.677 "is_configured": true, 00:10:36.677 "data_offset": 2048, 00:10:36.677 "data_size": 63488 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "name": "BaseBdev2", 00:10:36.677 "uuid": "2dd4e1e3-796d-4179-885c-ef53c86dd32c", 00:10:36.677 "is_configured": true, 00:10:36.677 "data_offset": 2048, 00:10:36.677 "data_size": 63488 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "name": "BaseBdev3", 00:10:36.677 "uuid": "2451cf36-0746-4ad9-9ccc-45aeff40915e", 00:10:36.677 "is_configured": true, 
00:10:36.677 "data_offset": 2048, 00:10:36.677 "data_size": 63488 00:10:36.677 }, 00:10:36.677 { 00:10:36.677 "name": "BaseBdev4", 00:10:36.677 "uuid": "ad360f41-b2d7-4717-89fc-f126a5506b59", 00:10:36.677 "is_configured": true, 00:10:36.677 "data_offset": 2048, 00:10:36.677 "data_size": 63488 00:10:36.677 } 00:10:36.677 ] 00:10:36.677 } 00:10:36.677 } 00:10:36.677 }' 00:10:36.677 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.677 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:36.677 BaseBdev2 00:10:36.677 BaseBdev3 00:10:36.677 BaseBdev4' 00:10:36.677 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.677 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.677 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.677 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:36.677 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.677 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.678 12:30:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.678 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.938 [2024-11-19 12:30:41.973407] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.938 [2024-11-19 12:30:41.973518] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.938 [2024-11-19 12:30:41.973611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.938 12:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.938 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:36.938 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.938 "name": "Existed_Raid", 00:10:36.938 "uuid": "60775da0-bdfc-485e-a165-a3173843ffa1", 00:10:36.938 "strip_size_kb": 64, 00:10:36.938 "state": "offline", 00:10:36.938 "raid_level": "concat", 00:10:36.938 "superblock": true, 00:10:36.938 "num_base_bdevs": 4, 00:10:36.938 "num_base_bdevs_discovered": 3, 00:10:36.938 "num_base_bdevs_operational": 3, 00:10:36.938 "base_bdevs_list": [ 00:10:36.938 { 00:10:36.938 "name": null, 00:10:36.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.938 "is_configured": false, 00:10:36.938 "data_offset": 0, 00:10:36.938 "data_size": 63488 00:10:36.938 }, 00:10:36.938 { 00:10:36.938 "name": "BaseBdev2", 00:10:36.938 "uuid": "2dd4e1e3-796d-4179-885c-ef53c86dd32c", 00:10:36.938 "is_configured": true, 00:10:36.938 "data_offset": 2048, 00:10:36.938 "data_size": 63488 00:10:36.938 }, 00:10:36.938 { 00:10:36.938 "name": "BaseBdev3", 00:10:36.938 "uuid": "2451cf36-0746-4ad9-9ccc-45aeff40915e", 00:10:36.938 "is_configured": true, 00:10:36.938 "data_offset": 2048, 00:10:36.938 "data_size": 63488 00:10:36.938 }, 00:10:36.938 { 00:10:36.938 "name": "BaseBdev4", 00:10:36.938 "uuid": "ad360f41-b2d7-4717-89fc-f126a5506b59", 00:10:36.938 "is_configured": true, 00:10:36.938 "data_offset": 2048, 00:10:36.938 "data_size": 63488 00:10:36.938 } 00:10:36.938 ] 00:10:36.938 }' 00:10:36.938 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.938 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.197 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:37.197 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.457 12:30:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.457 [2024-11-19 12:30:42.511974] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.457 [2024-11-19 12:30:42.583044] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:37.457 12:30:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.457 [2024-11-19 12:30:42.649904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:37.457 [2024-11-19 12:30:42.650001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.457 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.717 BaseBdev2 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.717 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.717 [ 00:10:37.717 { 00:10:37.717 "name": "BaseBdev2", 00:10:37.717 "aliases": [ 00:10:37.717 
"065fc8b4-54fd-4307-bce9-b200cea756a4" 00:10:37.717 ], 00:10:37.717 "product_name": "Malloc disk", 00:10:37.717 "block_size": 512, 00:10:37.717 "num_blocks": 65536, 00:10:37.717 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4", 00:10:37.717 "assigned_rate_limits": { 00:10:37.717 "rw_ios_per_sec": 0, 00:10:37.717 "rw_mbytes_per_sec": 0, 00:10:37.717 "r_mbytes_per_sec": 0, 00:10:37.717 "w_mbytes_per_sec": 0 00:10:37.717 }, 00:10:37.717 "claimed": false, 00:10:37.717 "zoned": false, 00:10:37.717 "supported_io_types": { 00:10:37.717 "read": true, 00:10:37.717 "write": true, 00:10:37.717 "unmap": true, 00:10:37.717 "flush": true, 00:10:37.717 "reset": true, 00:10:37.717 "nvme_admin": false, 00:10:37.717 "nvme_io": false, 00:10:37.717 "nvme_io_md": false, 00:10:37.717 "write_zeroes": true, 00:10:37.717 "zcopy": true, 00:10:37.717 "get_zone_info": false, 00:10:37.717 "zone_management": false, 00:10:37.717 "zone_append": false, 00:10:37.717 "compare": false, 00:10:37.717 "compare_and_write": false, 00:10:37.717 "abort": true, 00:10:37.717 "seek_hole": false, 00:10:37.717 "seek_data": false, 00:10:37.717 "copy": true, 00:10:37.717 "nvme_iov_md": false 00:10:37.717 }, 00:10:37.717 "memory_domains": [ 00:10:37.717 { 00:10:37.717 "dma_device_id": "system", 00:10:37.717 "dma_device_type": 1 00:10:37.717 }, 00:10:37.717 { 00:10:37.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.718 "dma_device_type": 2 00:10:37.718 } 00:10:37.718 ], 00:10:37.718 "driver_specific": {} 00:10:37.718 } 00:10:37.718 ] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.718 12:30:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.718 BaseBdev3 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.718 [ 00:10:37.718 { 
00:10:37.718 "name": "BaseBdev3", 00:10:37.718 "aliases": [ 00:10:37.718 "b1f7d185-687f-4011-95a3-ea4808bad8f0" 00:10:37.718 ], 00:10:37.718 "product_name": "Malloc disk", 00:10:37.718 "block_size": 512, 00:10:37.718 "num_blocks": 65536, 00:10:37.718 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0", 00:10:37.718 "assigned_rate_limits": { 00:10:37.718 "rw_ios_per_sec": 0, 00:10:37.718 "rw_mbytes_per_sec": 0, 00:10:37.718 "r_mbytes_per_sec": 0, 00:10:37.718 "w_mbytes_per_sec": 0 00:10:37.718 }, 00:10:37.718 "claimed": false, 00:10:37.718 "zoned": false, 00:10:37.718 "supported_io_types": { 00:10:37.718 "read": true, 00:10:37.718 "write": true, 00:10:37.718 "unmap": true, 00:10:37.718 "flush": true, 00:10:37.718 "reset": true, 00:10:37.718 "nvme_admin": false, 00:10:37.718 "nvme_io": false, 00:10:37.718 "nvme_io_md": false, 00:10:37.718 "write_zeroes": true, 00:10:37.718 "zcopy": true, 00:10:37.718 "get_zone_info": false, 00:10:37.718 "zone_management": false, 00:10:37.718 "zone_append": false, 00:10:37.718 "compare": false, 00:10:37.718 "compare_and_write": false, 00:10:37.718 "abort": true, 00:10:37.718 "seek_hole": false, 00:10:37.718 "seek_data": false, 00:10:37.718 "copy": true, 00:10:37.718 "nvme_iov_md": false 00:10:37.718 }, 00:10:37.718 "memory_domains": [ 00:10:37.718 { 00:10:37.718 "dma_device_id": "system", 00:10:37.718 "dma_device_type": 1 00:10:37.718 }, 00:10:37.718 { 00:10:37.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.718 "dma_device_type": 2 00:10:37.718 } 00:10:37.718 ], 00:10:37.718 "driver_specific": {} 00:10:37.718 } 00:10:37.718 ] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.718 BaseBdev4 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:37.718 [ 00:10:37.718 { 00:10:37.718 "name": "BaseBdev4", 00:10:37.718 "aliases": [ 00:10:37.718 "abb35096-a7d2-407b-a449-ed462d6a6fed" 00:10:37.718 ], 00:10:37.718 "product_name": "Malloc disk", 00:10:37.718 "block_size": 512, 00:10:37.718 "num_blocks": 65536, 00:10:37.718 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed", 00:10:37.718 "assigned_rate_limits": { 00:10:37.718 "rw_ios_per_sec": 0, 00:10:37.718 "rw_mbytes_per_sec": 0, 00:10:37.718 "r_mbytes_per_sec": 0, 00:10:37.718 "w_mbytes_per_sec": 0 00:10:37.718 }, 00:10:37.718 "claimed": false, 00:10:37.718 "zoned": false, 00:10:37.718 "supported_io_types": { 00:10:37.718 "read": true, 00:10:37.718 "write": true, 00:10:37.718 "unmap": true, 00:10:37.718 "flush": true, 00:10:37.718 "reset": true, 00:10:37.718 "nvme_admin": false, 00:10:37.718 "nvme_io": false, 00:10:37.718 "nvme_io_md": false, 00:10:37.718 "write_zeroes": true, 00:10:37.718 "zcopy": true, 00:10:37.718 "get_zone_info": false, 00:10:37.718 "zone_management": false, 00:10:37.718 "zone_append": false, 00:10:37.718 "compare": false, 00:10:37.718 "compare_and_write": false, 00:10:37.718 "abort": true, 00:10:37.718 "seek_hole": false, 00:10:37.718 "seek_data": false, 00:10:37.718 "copy": true, 00:10:37.718 "nvme_iov_md": false 00:10:37.718 }, 00:10:37.718 "memory_domains": [ 00:10:37.718 { 00:10:37.718 "dma_device_id": "system", 00:10:37.718 "dma_device_type": 1 00:10:37.718 }, 00:10:37.718 { 00:10:37.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.718 "dma_device_type": 2 00:10:37.718 } 00:10:37.718 ], 00:10:37.718 "driver_specific": {} 00:10:37.718 } 00:10:37.718 ] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.718 12:30:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.718 [2024-11-19 12:30:42.878533] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.718 [2024-11-19 12:30:42.878659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.718 [2024-11-19 12:30:42.878703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.718 [2024-11-19 12:30:42.880616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.718 [2024-11-19 12:30:42.880707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4
00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:37.718 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:37.719 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:37.719 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:37.719 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:37.719 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:37.719 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:37.719 "name": "Existed_Raid",
00:10:37.719 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573",
00:10:37.719 "strip_size_kb": 64,
00:10:37.719 "state": "configuring",
00:10:37.719 "raid_level": "concat",
00:10:37.719 "superblock": true,
00:10:37.719 "num_base_bdevs": 4,
00:10:37.719 "num_base_bdevs_discovered": 3,
00:10:37.719 "num_base_bdevs_operational": 4,
00:10:37.719 "base_bdevs_list": [
00:10:37.719 {
00:10:37.719 "name": "BaseBdev1",
00:10:37.719 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:37.719 "is_configured": false,
00:10:37.719 "data_offset": 0,
00:10:37.719 "data_size": 0
00:10:37.719 },
00:10:37.719 {
00:10:37.719 "name": "BaseBdev2",
00:10:37.719 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4",
00:10:37.719 "is_configured": true,
00:10:37.719 "data_offset": 2048,
00:10:37.719 "data_size": 63488
00:10:37.719 },
00:10:37.719 {
00:10:37.719 "name": "BaseBdev3",
00:10:37.719 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0",
00:10:37.719 "is_configured": true,
00:10:37.719 "data_offset": 2048,
00:10:37.719 "data_size": 63488
00:10:37.719 },
00:10:37.719 {
00:10:37.719 "name": "BaseBdev4",
00:10:37.719 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed",
00:10:37.719 "is_configured": true,
00:10:37.719 "data_offset": 2048,
00:10:37.719 "data_size": 63488
00:10:37.719 }
00:10:37.719 ]
00:10:37.719 }'
00:10:37.719 12:30:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:37.719 12:30:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.287 [2024-11-19 12:30:43.317813] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:38.287 "name": "Existed_Raid",
00:10:38.287 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573",
00:10:38.287 "strip_size_kb": 64,
00:10:38.287 "state": "configuring",
00:10:38.287 "raid_level": "concat",
00:10:38.287 "superblock": true,
00:10:38.287 "num_base_bdevs": 4,
00:10:38.287 "num_base_bdevs_discovered": 2,
00:10:38.287 "num_base_bdevs_operational": 4,
00:10:38.287 "base_bdevs_list": [
00:10:38.287 {
00:10:38.287 "name": "BaseBdev1",
00:10:38.287 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:38.287 "is_configured": false,
00:10:38.287 "data_offset": 0,
00:10:38.287 "data_size": 0
00:10:38.287 },
00:10:38.287 {
00:10:38.287 "name": null,
00:10:38.287 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4",
00:10:38.287 "is_configured": false,
00:10:38.287 "data_offset": 0,
00:10:38.287 "data_size": 63488
00:10:38.287 },
00:10:38.287 {
00:10:38.287 "name": "BaseBdev3",
00:10:38.287 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0",
00:10:38.287 "is_configured": true,
00:10:38.287 "data_offset": 2048,
00:10:38.287 "data_size": 63488
00:10:38.287 },
00:10:38.287 {
00:10:38.287 "name": "BaseBdev4",
00:10:38.287 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed",
00:10:38.287 "is_configured": true,
00:10:38.287 "data_offset": 2048,
00:10:38.287 "data_size": 63488
00:10:38.287 }
00:10:38.287 ]
00:10:38.287 }'
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:38.287 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.546 [2024-11-19 12:30:43.792158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:38.546 BaseBdev1
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.546 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.858 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:38.858 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.858 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.858 [
00:10:38.858 {
00:10:38.858 "name": "BaseBdev1",
00:10:38.858 "aliases": [
00:10:38.858 "e0a23917-5295-4c40-a939-725563e8ab51"
00:10:38.858 ],
00:10:38.858 "product_name": "Malloc disk",
00:10:38.858 "block_size": 512,
00:10:38.858 "num_blocks": 65536,
00:10:38.858 "uuid": "e0a23917-5295-4c40-a939-725563e8ab51",
00:10:38.858 "assigned_rate_limits": {
00:10:38.858 "rw_ios_per_sec": 0,
00:10:38.858 "rw_mbytes_per_sec": 0,
00:10:38.858 "r_mbytes_per_sec": 0,
00:10:38.858 "w_mbytes_per_sec": 0
00:10:38.858 },
00:10:38.858 "claimed": true,
00:10:38.858 "claim_type": "exclusive_write",
00:10:38.858 "zoned": false,
00:10:38.858 "supported_io_types": {
00:10:38.858 "read": true,
00:10:38.858 "write": true,
00:10:38.858 "unmap": true,
00:10:38.858 "flush": true,
00:10:38.858 "reset": true,
00:10:38.858 "nvme_admin": false,
00:10:38.858 "nvme_io": false,
00:10:38.858 "nvme_io_md": false,
00:10:38.858 "write_zeroes": true,
00:10:38.858 "zcopy": true,
00:10:38.858 "get_zone_info": false,
00:10:38.858 "zone_management": false,
00:10:38.858 "zone_append": false,
00:10:38.858 "compare": false,
00:10:38.858 "compare_and_write": false,
00:10:38.858 "abort": true,
00:10:38.858 "seek_hole": false,
00:10:38.858 "seek_data": false,
00:10:38.858 "copy": true,
00:10:38.858 "nvme_iov_md": false
00:10:38.858 },
00:10:38.858 "memory_domains": [
00:10:38.858 {
00:10:38.858 "dma_device_id": "system",
00:10:38.858 "dma_device_type": 1
00:10:38.858 },
00:10:38.858 {
00:10:38.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.859 "dma_device_type": 2
00:10:38.859 }
00:10:38.859 ],
00:10:38.859 "driver_specific": {}
00:10:38.859 }
00:10:38.859 ]
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:38.859 "name": "Existed_Raid",
00:10:38.859 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573",
00:10:38.859 "strip_size_kb": 64,
00:10:38.859 "state": "configuring",
00:10:38.859 "raid_level": "concat",
00:10:38.859 "superblock": true,
00:10:38.859 "num_base_bdevs": 4,
00:10:38.859 "num_base_bdevs_discovered": 3,
00:10:38.859 "num_base_bdevs_operational": 4,
00:10:38.859 "base_bdevs_list": [
00:10:38.859 {
00:10:38.859 "name": "BaseBdev1",
00:10:38.859 "uuid": "e0a23917-5295-4c40-a939-725563e8ab51",
00:10:38.859 "is_configured": true,
00:10:38.859 "data_offset": 2048,
00:10:38.859 "data_size": 63488
00:10:38.859 },
00:10:38.859 {
00:10:38.859 "name": null,
00:10:38.859 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4",
00:10:38.859 "is_configured": false,
00:10:38.859 "data_offset": 0,
00:10:38.859 "data_size": 63488
00:10:38.859 },
00:10:38.859 {
00:10:38.859 "name": "BaseBdev3",
00:10:38.859 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0",
00:10:38.859 "is_configured": true,
00:10:38.859 "data_offset": 2048,
00:10:38.859 "data_size": 63488
00:10:38.859 },
00:10:38.859 {
00:10:38.859 "name": "BaseBdev4",
00:10:38.859 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed",
00:10:38.859 "is_configured": true,
00:10:38.859 "data_offset": 2048,
00:10:38.859 "data_size": 63488
00:10:38.859 }
00:10:38.859 ]
00:10:38.859 }'
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:38.859 12:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.118 [2024-11-19 12:30:44.319357] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:39.118 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:39.118 "name": "Existed_Raid",
00:10:39.118 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573",
00:10:39.118 "strip_size_kb": 64,
00:10:39.119 "state": "configuring",
00:10:39.119 "raid_level": "concat",
00:10:39.119 "superblock": true,
00:10:39.119 "num_base_bdevs": 4,
00:10:39.119 "num_base_bdevs_discovered": 2,
00:10:39.119 "num_base_bdevs_operational": 4,
00:10:39.119 "base_bdevs_list": [
00:10:39.119 {
00:10:39.119 "name": "BaseBdev1",
00:10:39.119 "uuid": "e0a23917-5295-4c40-a939-725563e8ab51",
00:10:39.119 "is_configured": true,
00:10:39.119 "data_offset": 2048,
00:10:39.119 "data_size": 63488
00:10:39.119 },
00:10:39.119 {
00:10:39.119 "name": null,
00:10:39.119 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4",
00:10:39.119 "is_configured": false,
00:10:39.119 "data_offset": 0,
00:10:39.119 "data_size": 63488
00:10:39.119 },
00:10:39.119 {
00:10:39.119 "name": null,
00:10:39.119 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0",
00:10:39.119 "is_configured": false,
00:10:39.119 "data_offset": 0,
00:10:39.119 "data_size": 63488
00:10:39.119 },
00:10:39.119 {
00:10:39.119 "name": "BaseBdev4",
00:10:39.119 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed",
00:10:39.119 "is_configured": true,
00:10:39.119 "data_offset": 2048,
00:10:39.119 "data_size": 63488
00:10:39.119 }
00:10:39.119 ]
00:10:39.119 }'
00:10:39.119 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:39.119 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.686 [2024-11-19 12:30:44.854531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:39.686 "name": "Existed_Raid",
00:10:39.686 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573",
00:10:39.686 "strip_size_kb": 64,
00:10:39.686 "state": "configuring",
00:10:39.686 "raid_level": "concat",
00:10:39.686 "superblock": true,
00:10:39.686 "num_base_bdevs": 4,
00:10:39.686 "num_base_bdevs_discovered": 3,
00:10:39.686 "num_base_bdevs_operational": 4,
00:10:39.686 "base_bdevs_list": [
00:10:39.686 {
00:10:39.686 "name": "BaseBdev1",
00:10:39.686 "uuid": "e0a23917-5295-4c40-a939-725563e8ab51",
00:10:39.686 "is_configured": true,
00:10:39.686 "data_offset": 2048,
00:10:39.686 "data_size": 63488
00:10:39.686 },
00:10:39.686 {
00:10:39.686 "name": null,
00:10:39.686 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4",
00:10:39.686 "is_configured": false,
00:10:39.686 "data_offset": 0,
00:10:39.686 "data_size": 63488
00:10:39.686 },
00:10:39.686 {
00:10:39.686 "name": "BaseBdev3",
00:10:39.686 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0",
00:10:39.686 "is_configured": true,
00:10:39.686 "data_offset": 2048,
00:10:39.686 "data_size": 63488
00:10:39.686 },
00:10:39.686 {
00:10:39.686 "name": "BaseBdev4",
00:10:39.686 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed",
00:10:39.686 "is_configured": true,
00:10:39.686 "data_offset": 2048,
00:10:39.686 "data_size": 63488
00:10:39.686 }
00:10:39.686 ]
00:10:39.686 }'
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:39.686 12:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.253 [2024-11-19 12:30:45.389625] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:40.253 "name": "Existed_Raid",
00:10:40.253 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573",
00:10:40.253 "strip_size_kb": 64,
00:10:40.253 "state": "configuring",
00:10:40.253 "raid_level": "concat",
00:10:40.253 "superblock": true,
00:10:40.253 "num_base_bdevs": 4,
00:10:40.253 "num_base_bdevs_discovered": 2,
00:10:40.253 "num_base_bdevs_operational": 4,
00:10:40.253 "base_bdevs_list": [
00:10:40.253 {
00:10:40.253 "name": null,
00:10:40.253 "uuid": "e0a23917-5295-4c40-a939-725563e8ab51",
00:10:40.253 "is_configured": false,
00:10:40.253 "data_offset": 0,
00:10:40.253 "data_size": 63488
00:10:40.253 },
00:10:40.253 {
00:10:40.253 "name": null,
00:10:40.253 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4",
00:10:40.253 "is_configured": false,
00:10:40.253 "data_offset": 0,
00:10:40.253 "data_size": 63488
00:10:40.253 },
00:10:40.253 {
00:10:40.253 "name": "BaseBdev3",
00:10:40.253 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0",
00:10:40.253 "is_configured": true,
00:10:40.253 "data_offset": 2048,
00:10:40.253 "data_size": 63488
00:10:40.253 },
00:10:40.253 {
00:10:40.253 "name": "BaseBdev4",
00:10:40.253 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed",
00:10:40.253 "is_configured": true,
00:10:40.253 "data_offset": 2048,
00:10:40.253 "data_size": 63488
00:10:40.253 }
00:10:40.253 ]
00:10:40.253 }'
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:40.253 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.821 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.821 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.821 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.822 [2024-11-19 12:30:45.931313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:40.822 "name": "Existed_Raid",
00:10:40.822 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573",
00:10:40.822 "strip_size_kb": 64,
00:10:40.822 "state": "configuring",
00:10:40.822 "raid_level": "concat",
00:10:40.822 "superblock": true,
00:10:40.822 "num_base_bdevs": 4,
00:10:40.822 "num_base_bdevs_discovered": 3,
00:10:40.822 "num_base_bdevs_operational": 4,
00:10:40.822 "base_bdevs_list": [
00:10:40.822 {
00:10:40.822 "name": null,
00:10:40.822 "uuid": "e0a23917-5295-4c40-a939-725563e8ab51",
00:10:40.822 "is_configured": false,
00:10:40.822 "data_offset": 0,
00:10:40.822 "data_size": 63488
00:10:40.822 },
00:10:40.822 {
00:10:40.822 "name": "BaseBdev2",
00:10:40.822 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4",
00:10:40.822 "is_configured": true,
00:10:40.822 "data_offset": 2048,
00:10:40.822 "data_size": 63488
00:10:40.822 },
00:10:40.822 {
00:10:40.822 "name": "BaseBdev3",
00:10:40.822 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0",
00:10:40.822 "is_configured": true,
00:10:40.822 "data_offset": 2048,
00:10:40.822 "data_size": 63488
00:10:40.822 },
00:10:40.822 {
00:10:40.822 "name": "BaseBdev4",
00:10:40.822 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed",
00:10:40.822 "is_configured": true,
00:10:40.822 "data_offset": 2048,
00:10:40.822 "data_size": 63488
00:10:40.822 }
00:10:40.822 ]
00:10:40.822 }'
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:40.822 12:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e0a23917-5295-4c40-a939-725563e8ab51
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.392 [2024-11-19 12:30:46.465733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:41.392 [2024-11-19 12:30:46.465962] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:41.392 [2024-11-19 12:30:46.465976] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:41.392 [2024-11-19 12:30:46.466217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:10:41.392 NewBaseBdev [2024-11-19 12:30:46.466336] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:41.392 [2024-11-19 12:30:46.466348] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:10:41.392 [2024-11-19 12:30:46.466461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:41.392 12:30:46
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.392 [ 00:10:41.392 { 00:10:41.392 "name": "NewBaseBdev", 00:10:41.392 "aliases": [ 00:10:41.392 "e0a23917-5295-4c40-a939-725563e8ab51" 00:10:41.392 ], 00:10:41.392 "product_name": "Malloc disk", 00:10:41.392 "block_size": 512, 00:10:41.392 "num_blocks": 65536, 00:10:41.392 "uuid": "e0a23917-5295-4c40-a939-725563e8ab51", 00:10:41.392 "assigned_rate_limits": { 00:10:41.392 "rw_ios_per_sec": 0, 00:10:41.392 "rw_mbytes_per_sec": 0, 00:10:41.392 "r_mbytes_per_sec": 0, 00:10:41.392 "w_mbytes_per_sec": 0 00:10:41.392 }, 00:10:41.392 "claimed": true, 00:10:41.392 "claim_type": "exclusive_write", 00:10:41.392 "zoned": false, 00:10:41.392 "supported_io_types": { 00:10:41.392 "read": true, 00:10:41.392 "write": true, 00:10:41.392 "unmap": true, 00:10:41.392 "flush": true, 00:10:41.392 "reset": true, 00:10:41.392 "nvme_admin": false, 00:10:41.392 "nvme_io": false, 00:10:41.392 "nvme_io_md": false, 00:10:41.392 "write_zeroes": true, 00:10:41.392 "zcopy": true, 00:10:41.392 "get_zone_info": false, 00:10:41.392 "zone_management": false, 00:10:41.392 "zone_append": false, 00:10:41.392 "compare": false, 00:10:41.392 "compare_and_write": false, 00:10:41.392 "abort": true, 00:10:41.392 "seek_hole": false, 00:10:41.392 "seek_data": false, 00:10:41.392 "copy": true, 00:10:41.392 "nvme_iov_md": false 00:10:41.392 }, 00:10:41.392 "memory_domains": [ 00:10:41.392 { 00:10:41.392 "dma_device_id": "system", 00:10:41.392 "dma_device_type": 1 00:10:41.392 }, 00:10:41.392 { 00:10:41.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.392 "dma_device_type": 2 00:10:41.392 } 00:10:41.392 ], 00:10:41.392 "driver_specific": {} 00:10:41.392 } 00:10:41.392 ] 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:41.392 12:30:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.392 "name": "Existed_Raid", 00:10:41.392 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573", 00:10:41.392 "strip_size_kb": 64, 00:10:41.392 
"state": "online", 00:10:41.392 "raid_level": "concat", 00:10:41.392 "superblock": true, 00:10:41.392 "num_base_bdevs": 4, 00:10:41.392 "num_base_bdevs_discovered": 4, 00:10:41.392 "num_base_bdevs_operational": 4, 00:10:41.392 "base_bdevs_list": [ 00:10:41.392 { 00:10:41.392 "name": "NewBaseBdev", 00:10:41.392 "uuid": "e0a23917-5295-4c40-a939-725563e8ab51", 00:10:41.392 "is_configured": true, 00:10:41.392 "data_offset": 2048, 00:10:41.392 "data_size": 63488 00:10:41.392 }, 00:10:41.392 { 00:10:41.392 "name": "BaseBdev2", 00:10:41.392 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4", 00:10:41.392 "is_configured": true, 00:10:41.392 "data_offset": 2048, 00:10:41.392 "data_size": 63488 00:10:41.392 }, 00:10:41.392 { 00:10:41.392 "name": "BaseBdev3", 00:10:41.392 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0", 00:10:41.392 "is_configured": true, 00:10:41.392 "data_offset": 2048, 00:10:41.392 "data_size": 63488 00:10:41.392 }, 00:10:41.392 { 00:10:41.392 "name": "BaseBdev4", 00:10:41.392 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed", 00:10:41.392 "is_configured": true, 00:10:41.392 "data_offset": 2048, 00:10:41.392 "data_size": 63488 00:10:41.392 } 00:10:41.392 ] 00:10:41.392 }' 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.392 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.961 
12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.961 [2024-11-19 12:30:46.933295] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.961 "name": "Existed_Raid", 00:10:41.961 "aliases": [ 00:10:41.961 "ec88c07d-f876-49fc-819d-b6e8bcf33573" 00:10:41.961 ], 00:10:41.961 "product_name": "Raid Volume", 00:10:41.961 "block_size": 512, 00:10:41.961 "num_blocks": 253952, 00:10:41.961 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573", 00:10:41.961 "assigned_rate_limits": { 00:10:41.961 "rw_ios_per_sec": 0, 00:10:41.961 "rw_mbytes_per_sec": 0, 00:10:41.961 "r_mbytes_per_sec": 0, 00:10:41.961 "w_mbytes_per_sec": 0 00:10:41.961 }, 00:10:41.961 "claimed": false, 00:10:41.961 "zoned": false, 00:10:41.961 "supported_io_types": { 00:10:41.961 "read": true, 00:10:41.961 "write": true, 00:10:41.961 "unmap": true, 00:10:41.961 "flush": true, 00:10:41.961 "reset": true, 00:10:41.961 "nvme_admin": false, 00:10:41.961 "nvme_io": false, 00:10:41.961 "nvme_io_md": false, 00:10:41.961 "write_zeroes": true, 00:10:41.961 "zcopy": false, 00:10:41.961 "get_zone_info": false, 00:10:41.961 "zone_management": false, 00:10:41.961 "zone_append": false, 00:10:41.961 "compare": false, 00:10:41.961 "compare_and_write": false, 00:10:41.961 "abort": 
false, 00:10:41.961 "seek_hole": false, 00:10:41.961 "seek_data": false, 00:10:41.961 "copy": false, 00:10:41.961 "nvme_iov_md": false 00:10:41.961 }, 00:10:41.961 "memory_domains": [ 00:10:41.961 { 00:10:41.961 "dma_device_id": "system", 00:10:41.961 "dma_device_type": 1 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.961 "dma_device_type": 2 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 "dma_device_id": "system", 00:10:41.961 "dma_device_type": 1 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.961 "dma_device_type": 2 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 "dma_device_id": "system", 00:10:41.961 "dma_device_type": 1 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.961 "dma_device_type": 2 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 "dma_device_id": "system", 00:10:41.961 "dma_device_type": 1 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.961 "dma_device_type": 2 00:10:41.961 } 00:10:41.961 ], 00:10:41.961 "driver_specific": { 00:10:41.961 "raid": { 00:10:41.961 "uuid": "ec88c07d-f876-49fc-819d-b6e8bcf33573", 00:10:41.961 "strip_size_kb": 64, 00:10:41.961 "state": "online", 00:10:41.961 "raid_level": "concat", 00:10:41.961 "superblock": true, 00:10:41.961 "num_base_bdevs": 4, 00:10:41.961 "num_base_bdevs_discovered": 4, 00:10:41.961 "num_base_bdevs_operational": 4, 00:10:41.961 "base_bdevs_list": [ 00:10:41.961 { 00:10:41.961 "name": "NewBaseBdev", 00:10:41.961 "uuid": "e0a23917-5295-4c40-a939-725563e8ab51", 00:10:41.961 "is_configured": true, 00:10:41.961 "data_offset": 2048, 00:10:41.961 "data_size": 63488 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 "name": "BaseBdev2", 00:10:41.961 "uuid": "065fc8b4-54fd-4307-bce9-b200cea756a4", 00:10:41.961 "is_configured": true, 00:10:41.961 "data_offset": 2048, 00:10:41.961 "data_size": 63488 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 
"name": "BaseBdev3", 00:10:41.961 "uuid": "b1f7d185-687f-4011-95a3-ea4808bad8f0", 00:10:41.961 "is_configured": true, 00:10:41.961 "data_offset": 2048, 00:10:41.961 "data_size": 63488 00:10:41.961 }, 00:10:41.961 { 00:10:41.961 "name": "BaseBdev4", 00:10:41.961 "uuid": "abb35096-a7d2-407b-a449-ed462d6a6fed", 00:10:41.961 "is_configured": true, 00:10:41.961 "data_offset": 2048, 00:10:41.961 "data_size": 63488 00:10:41.961 } 00:10:41.961 ] 00:10:41.961 } 00:10:41.961 } 00:10:41.961 }' 00:10:41.961 12:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:41.961 BaseBdev2 00:10:41.961 BaseBdev3 00:10:41.961 BaseBdev4' 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.961 12:30:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.961 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.221 [2024-11-19 12:30:47.284377] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.221 [2024-11-19 12:30:47.284457] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.221 [2024-11-19 12:30:47.284557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.221 [2024-11-19 12:30:47.284656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.221 [2024-11-19 12:30:47.284666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83006 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83006 ']' 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83006 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83006 00:10:42.221 killing process with pid 83006 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83006' 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83006 00:10:42.221 [2024-11-19 12:30:47.333812] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.221 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83006 00:10:42.221 [2024-11-19 12:30:47.376248] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.480 12:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:42.480 00:10:42.480 real 0m9.775s 00:10:42.480 user 0m16.619s 00:10:42.481 sys 0m2.130s 00:10:42.481 ************************************ 00:10:42.481 END TEST raid_state_function_test_sb 00:10:42.481 
************************************ 00:10:42.481 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.481 12:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.481 12:30:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:42.481 12:30:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:42.481 12:30:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.481 12:30:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.481 ************************************ 00:10:42.481 START TEST raid_superblock_test 00:10:42.481 ************************************ 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:42.481 12:30:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83654 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83654 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83654 ']' 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.481 12:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.741 [2024-11-19 12:30:47.787119] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:42.741 [2024-11-19 12:30:47.787329] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83654 ] 00:10:42.741 [2024-11-19 12:30:47.950059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.000 [2024-11-19 12:30:48.003475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.000 [2024-11-19 12:30:48.046440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.000 [2024-11-19 12:30:48.046485] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:43.571 
12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.571 malloc1 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:43.571 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.572 [2024-11-19 12:30:48.641760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:43.572 [2024-11-19 12:30:48.641956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.572 [2024-11-19 12:30:48.642003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:43.572 [2024-11-19 12:30:48.642041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.572 [2024-11-19 12:30:48.644268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.572 [2024-11-19 12:30:48.644354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:43.572 pt1 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.572 malloc2 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.572 [2024-11-19 12:30:48.684515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.572 [2024-11-19 12:30:48.684675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.572 [2024-11-19 12:30:48.684700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:43.572 [2024-11-19 12:30:48.684711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.572 [2024-11-19 12:30:48.686949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.572 [2024-11-19 12:30:48.686990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.572 
pt2 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.572 malloc3 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.572 [2024-11-19 12:30:48.713535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:43.572 [2024-11-19 12:30:48.713676] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.572 [2024-11-19 12:30:48.713713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:43.572 [2024-11-19 12:30:48.713753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.572 [2024-11-19 12:30:48.715829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.572 [2024-11-19 12:30:48.715919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:43.572 pt3 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.572 malloc4 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.572 [2024-11-19 12:30:48.746584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:43.572 [2024-11-19 12:30:48.746755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.572 [2024-11-19 12:30:48.746818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:43.572 [2024-11-19 12:30:48.746857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.572 [2024-11-19 12:30:48.748927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.572 [2024-11-19 12:30:48.749002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:43.572 pt4 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.572 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.572 [2024-11-19 12:30:48.758653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:43.572 [2024-11-19 
12:30:48.760627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.572 [2024-11-19 12:30:48.760732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:43.572 [2024-11-19 12:30:48.760829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:43.572 [2024-11-19 12:30:48.761031] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:43.572 [2024-11-19 12:30:48.761081] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:43.572 [2024-11-19 12:30:48.761368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:43.572 [2024-11-19 12:30:48.761544] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:43.573 [2024-11-19 12:30:48.761585] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:43.573 [2024-11-19 12:30:48.761765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.573 "name": "raid_bdev1", 00:10:43.573 "uuid": "5d270023-eead-4fde-8f89-f44865489a79", 00:10:43.573 "strip_size_kb": 64, 00:10:43.573 "state": "online", 00:10:43.573 "raid_level": "concat", 00:10:43.573 "superblock": true, 00:10:43.573 "num_base_bdevs": 4, 00:10:43.573 "num_base_bdevs_discovered": 4, 00:10:43.573 "num_base_bdevs_operational": 4, 00:10:43.573 "base_bdevs_list": [ 00:10:43.573 { 00:10:43.573 "name": "pt1", 00:10:43.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.573 "is_configured": true, 00:10:43.573 "data_offset": 2048, 00:10:43.573 "data_size": 63488 00:10:43.573 }, 00:10:43.573 { 00:10:43.573 "name": "pt2", 00:10:43.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.573 "is_configured": true, 00:10:43.573 "data_offset": 2048, 00:10:43.573 "data_size": 63488 00:10:43.573 }, 00:10:43.573 { 00:10:43.573 "name": "pt3", 00:10:43.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.573 "is_configured": true, 00:10:43.573 "data_offset": 2048, 00:10:43.573 
"data_size": 63488 00:10:43.573 }, 00:10:43.573 { 00:10:43.573 "name": "pt4", 00:10:43.573 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.573 "is_configured": true, 00:10:43.573 "data_offset": 2048, 00:10:43.573 "data_size": 63488 00:10:43.573 } 00:10:43.573 ] 00:10:43.573 }' 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.573 12:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.143 [2024-11-19 12:30:49.182283] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.143 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.143 "name": "raid_bdev1", 00:10:44.143 "aliases": [ 00:10:44.143 "5d270023-eead-4fde-8f89-f44865489a79" 
00:10:44.143 ], 00:10:44.143 "product_name": "Raid Volume", 00:10:44.143 "block_size": 512, 00:10:44.143 "num_blocks": 253952, 00:10:44.143 "uuid": "5d270023-eead-4fde-8f89-f44865489a79", 00:10:44.143 "assigned_rate_limits": { 00:10:44.143 "rw_ios_per_sec": 0, 00:10:44.143 "rw_mbytes_per_sec": 0, 00:10:44.143 "r_mbytes_per_sec": 0, 00:10:44.143 "w_mbytes_per_sec": 0 00:10:44.143 }, 00:10:44.143 "claimed": false, 00:10:44.143 "zoned": false, 00:10:44.143 "supported_io_types": { 00:10:44.143 "read": true, 00:10:44.143 "write": true, 00:10:44.143 "unmap": true, 00:10:44.143 "flush": true, 00:10:44.143 "reset": true, 00:10:44.143 "nvme_admin": false, 00:10:44.143 "nvme_io": false, 00:10:44.143 "nvme_io_md": false, 00:10:44.143 "write_zeroes": true, 00:10:44.143 "zcopy": false, 00:10:44.143 "get_zone_info": false, 00:10:44.143 "zone_management": false, 00:10:44.143 "zone_append": false, 00:10:44.143 "compare": false, 00:10:44.143 "compare_and_write": false, 00:10:44.143 "abort": false, 00:10:44.143 "seek_hole": false, 00:10:44.143 "seek_data": false, 00:10:44.143 "copy": false, 00:10:44.144 "nvme_iov_md": false 00:10:44.144 }, 00:10:44.144 "memory_domains": [ 00:10:44.144 { 00:10:44.144 "dma_device_id": "system", 00:10:44.144 "dma_device_type": 1 00:10:44.144 }, 00:10:44.144 { 00:10:44.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.144 "dma_device_type": 2 00:10:44.144 }, 00:10:44.144 { 00:10:44.144 "dma_device_id": "system", 00:10:44.144 "dma_device_type": 1 00:10:44.144 }, 00:10:44.144 { 00:10:44.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.144 "dma_device_type": 2 00:10:44.144 }, 00:10:44.144 { 00:10:44.144 "dma_device_id": "system", 00:10:44.144 "dma_device_type": 1 00:10:44.144 }, 00:10:44.144 { 00:10:44.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.144 "dma_device_type": 2 00:10:44.144 }, 00:10:44.144 { 00:10:44.144 "dma_device_id": "system", 00:10:44.144 "dma_device_type": 1 00:10:44.144 }, 00:10:44.144 { 00:10:44.144 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:44.144 "dma_device_type": 2 00:10:44.144 } 00:10:44.144 ], 00:10:44.144 "driver_specific": { 00:10:44.144 "raid": { 00:10:44.144 "uuid": "5d270023-eead-4fde-8f89-f44865489a79", 00:10:44.144 "strip_size_kb": 64, 00:10:44.144 "state": "online", 00:10:44.144 "raid_level": "concat", 00:10:44.144 "superblock": true, 00:10:44.144 "num_base_bdevs": 4, 00:10:44.144 "num_base_bdevs_discovered": 4, 00:10:44.144 "num_base_bdevs_operational": 4, 00:10:44.144 "base_bdevs_list": [ 00:10:44.144 { 00:10:44.144 "name": "pt1", 00:10:44.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.145 "is_configured": true, 00:10:44.145 "data_offset": 2048, 00:10:44.145 "data_size": 63488 00:10:44.145 }, 00:10:44.145 { 00:10:44.145 "name": "pt2", 00:10:44.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.145 "is_configured": true, 00:10:44.145 "data_offset": 2048, 00:10:44.145 "data_size": 63488 00:10:44.145 }, 00:10:44.145 { 00:10:44.145 "name": "pt3", 00:10:44.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.145 "is_configured": true, 00:10:44.145 "data_offset": 2048, 00:10:44.145 "data_size": 63488 00:10:44.145 }, 00:10:44.145 { 00:10:44.145 "name": "pt4", 00:10:44.145 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.145 "is_configured": true, 00:10:44.145 "data_offset": 2048, 00:10:44.145 "data_size": 63488 00:10:44.145 } 00:10:44.145 ] 00:10:44.145 } 00:10:44.145 } 00:10:44.145 }' 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:44.145 pt2 00:10:44.145 pt3 00:10:44.145 pt4' 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.145 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.146 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.146 12:30:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.146 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:44.146 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.146 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 [2024-11-19 12:30:49.501724] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5d270023-eead-4fde-8f89-f44865489a79 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5d270023-eead-4fde-8f89-f44865489a79 ']' 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 [2024-11-19 12:30:49.541291] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.406 [2024-11-19 12:30:49.541406] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.406 [2024-11-19 12:30:49.541532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.406 [2024-11-19 12:30:49.541627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.406 [2024-11-19 12:30:49.541682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.667 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.667 [2024-11-19 12:30:49.697067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:44.667 [2024-11-19 12:30:49.699415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:44.667 [2024-11-19 12:30:49.699548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:44.667 [2024-11-19 12:30:49.699589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:44.667 [2024-11-19 12:30:49.699669] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:44.667 [2024-11-19 12:30:49.699726] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:44.667 [2024-11-19 12:30:49.699771] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:44.668 [2024-11-19 12:30:49.699794] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:44.668 [2024-11-19 12:30:49.699813] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.668 [2024-11-19 12:30:49.699825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:10:44.668 request: 00:10:44.668 { 00:10:44.668 "name": "raid_bdev1", 00:10:44.668 "raid_level": "concat", 00:10:44.668 "base_bdevs": [ 00:10:44.668 "malloc1", 00:10:44.668 "malloc2", 00:10:44.668 "malloc3", 00:10:44.668 "malloc4" 00:10:44.668 ], 00:10:44.668 "strip_size_kb": 64, 00:10:44.668 "superblock": false, 00:10:44.668 "method": "bdev_raid_create", 00:10:44.668 "req_id": 1 00:10:44.668 } 00:10:44.668 Got JSON-RPC error response 00:10:44.668 response: 00:10:44.668 { 00:10:44.668 "code": -17, 00:10:44.668 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:44.668 } 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.668 [2024-11-19 12:30:49.760899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:44.668 [2024-11-19 12:30:49.761012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.668 [2024-11-19 12:30:49.761078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:44.668 [2024-11-19 12:30:49.761121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.668 [2024-11-19 12:30:49.763860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.668 [2024-11-19 12:30:49.763956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:44.668 [2024-11-19 12:30:49.764080] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:44.668 [2024-11-19 12:30:49.764169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.668 pt1 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.668 "name": "raid_bdev1", 00:10:44.668 "uuid": "5d270023-eead-4fde-8f89-f44865489a79", 00:10:44.668 "strip_size_kb": 64, 00:10:44.668 "state": "configuring", 00:10:44.668 "raid_level": "concat", 00:10:44.668 "superblock": true, 00:10:44.668 "num_base_bdevs": 4, 00:10:44.668 "num_base_bdevs_discovered": 1, 00:10:44.668 "num_base_bdevs_operational": 4, 00:10:44.668 "base_bdevs_list": [ 00:10:44.668 { 00:10:44.668 "name": "pt1", 00:10:44.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.668 "is_configured": true, 00:10:44.668 "data_offset": 2048, 00:10:44.668 "data_size": 63488 00:10:44.668 }, 00:10:44.668 { 00:10:44.668 "name": null, 00:10:44.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.668 "is_configured": false, 00:10:44.668 "data_offset": 2048, 00:10:44.668 "data_size": 63488 00:10:44.668 }, 00:10:44.668 { 00:10:44.668 "name": null, 00:10:44.668 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.668 "is_configured": false, 00:10:44.668 "data_offset": 2048, 00:10:44.668 "data_size": 63488 00:10:44.668 }, 00:10:44.668 { 00:10:44.668 "name": null, 00:10:44.668 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.668 "is_configured": false, 00:10:44.668 "data_offset": 2048, 00:10:44.668 "data_size": 63488 00:10:44.668 } 00:10:44.668 ] 00:10:44.668 }' 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.668 12:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.933 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:44.933 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.933 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.933 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.933 [2024-11-19 12:30:50.172276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.933 [2024-11-19 12:30:50.172366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.933 [2024-11-19 12:30:50.172399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:44.933 [2024-11-19 12:30:50.172412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.934 [2024-11-19 12:30:50.172981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.934 [2024-11-19 12:30:50.173011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.934 [2024-11-19 12:30:50.173122] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:44.934 [2024-11-19 12:30:50.173162] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.934 pt2 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.934 [2024-11-19 12:30:50.184235] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.934 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.193 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.193 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.193 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.193 12:30:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.193 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.193 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.193 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.193 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.193 "name": "raid_bdev1", 00:10:45.193 "uuid": "5d270023-eead-4fde-8f89-f44865489a79", 00:10:45.193 "strip_size_kb": 64, 00:10:45.193 "state": "configuring", 00:10:45.193 "raid_level": "concat", 00:10:45.193 "superblock": true, 00:10:45.193 "num_base_bdevs": 4, 00:10:45.193 "num_base_bdevs_discovered": 1, 00:10:45.193 "num_base_bdevs_operational": 4, 00:10:45.193 "base_bdevs_list": [ 00:10:45.193 { 00:10:45.193 "name": "pt1", 00:10:45.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.193 "is_configured": true, 00:10:45.193 "data_offset": 2048, 00:10:45.193 "data_size": 63488 00:10:45.193 }, 00:10:45.193 { 00:10:45.193 "name": null, 00:10:45.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.193 "is_configured": false, 00:10:45.193 "data_offset": 0, 00:10:45.193 "data_size": 63488 00:10:45.193 }, 00:10:45.193 { 00:10:45.193 "name": null, 00:10:45.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.193 "is_configured": false, 00:10:45.193 "data_offset": 2048, 00:10:45.193 "data_size": 63488 00:10:45.193 }, 00:10:45.193 { 00:10:45.193 "name": null, 00:10:45.193 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.193 "is_configured": false, 00:10:45.193 "data_offset": 2048, 00:10:45.193 "data_size": 63488 00:10:45.193 } 00:10:45.193 ] 00:10:45.193 }' 00:10:45.193 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.193 12:30:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.453 [2024-11-19 12:30:50.563651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.453 [2024-11-19 12:30:50.563759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.453 [2024-11-19 12:30:50.563785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:45.453 [2024-11-19 12:30:50.563800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.453 [2024-11-19 12:30:50.564348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.453 [2024-11-19 12:30:50.564395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.453 [2024-11-19 12:30:50.564497] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:45.453 [2024-11-19 12:30:50.564528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.453 pt2 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.453 [2024-11-19 12:30:50.575548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:45.453 [2024-11-19 12:30:50.575620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.453 [2024-11-19 12:30:50.575643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:45.453 [2024-11-19 12:30:50.575657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.453 [2024-11-19 12:30:50.576073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.453 [2024-11-19 12:30:50.576096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:45.453 [2024-11-19 12:30:50.576172] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:45.453 [2024-11-19 12:30:50.576197] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:45.453 pt3 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.453 [2024-11-19 12:30:50.587531] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:45.453 [2024-11-19 12:30:50.587596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.453 [2024-11-19 12:30:50.587615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:45.453 [2024-11-19 12:30:50.587628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.453 [2024-11-19 12:30:50.588032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.453 [2024-11-19 12:30:50.588061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:45.453 [2024-11-19 12:30:50.588142] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:45.453 [2024-11-19 12:30:50.588167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:45.453 [2024-11-19 12:30:50.588285] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:45.453 [2024-11-19 12:30:50.588302] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:45.453 [2024-11-19 12:30:50.588569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:45.453 [2024-11-19 12:30:50.588719] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:45.453 [2024-11-19 12:30:50.588739] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:45.453 [2024-11-19 12:30:50.588883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.453 pt4 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.453 "name": "raid_bdev1", 00:10:45.453 "uuid": "5d270023-eead-4fde-8f89-f44865489a79", 00:10:45.453 "strip_size_kb": 64, 00:10:45.453 "state": "online", 00:10:45.453 "raid_level": "concat", 00:10:45.453 
"superblock": true, 00:10:45.453 "num_base_bdevs": 4, 00:10:45.453 "num_base_bdevs_discovered": 4, 00:10:45.453 "num_base_bdevs_operational": 4, 00:10:45.453 "base_bdevs_list": [ 00:10:45.453 { 00:10:45.453 "name": "pt1", 00:10:45.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.453 "is_configured": true, 00:10:45.453 "data_offset": 2048, 00:10:45.453 "data_size": 63488 00:10:45.453 }, 00:10:45.453 { 00:10:45.453 "name": "pt2", 00:10:45.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.453 "is_configured": true, 00:10:45.453 "data_offset": 2048, 00:10:45.453 "data_size": 63488 00:10:45.453 }, 00:10:45.453 { 00:10:45.453 "name": "pt3", 00:10:45.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.453 "is_configured": true, 00:10:45.453 "data_offset": 2048, 00:10:45.453 "data_size": 63488 00:10:45.453 }, 00:10:45.453 { 00:10:45.453 "name": "pt4", 00:10:45.453 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.453 "is_configured": true, 00:10:45.453 "data_offset": 2048, 00:10:45.453 "data_size": 63488 00:10:45.453 } 00:10:45.453 ] 00:10:45.453 }' 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.453 12:30:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.022 12:30:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.022 [2024-11-19 12:30:51.027257] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.022 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.022 "name": "raid_bdev1", 00:10:46.022 "aliases": [ 00:10:46.022 "5d270023-eead-4fde-8f89-f44865489a79" 00:10:46.022 ], 00:10:46.022 "product_name": "Raid Volume", 00:10:46.022 "block_size": 512, 00:10:46.022 "num_blocks": 253952, 00:10:46.022 "uuid": "5d270023-eead-4fde-8f89-f44865489a79", 00:10:46.022 "assigned_rate_limits": { 00:10:46.022 "rw_ios_per_sec": 0, 00:10:46.022 "rw_mbytes_per_sec": 0, 00:10:46.022 "r_mbytes_per_sec": 0, 00:10:46.022 "w_mbytes_per_sec": 0 00:10:46.022 }, 00:10:46.022 "claimed": false, 00:10:46.022 "zoned": false, 00:10:46.022 "supported_io_types": { 00:10:46.022 "read": true, 00:10:46.022 "write": true, 00:10:46.022 "unmap": true, 00:10:46.022 "flush": true, 00:10:46.022 "reset": true, 00:10:46.022 "nvme_admin": false, 00:10:46.022 "nvme_io": false, 00:10:46.022 "nvme_io_md": false, 00:10:46.022 "write_zeroes": true, 00:10:46.022 "zcopy": false, 00:10:46.022 "get_zone_info": false, 00:10:46.022 "zone_management": false, 00:10:46.022 "zone_append": false, 00:10:46.022 "compare": false, 00:10:46.022 "compare_and_write": false, 00:10:46.022 "abort": false, 00:10:46.022 "seek_hole": false, 00:10:46.022 "seek_data": false, 00:10:46.022 "copy": false, 00:10:46.022 "nvme_iov_md": false 00:10:46.022 }, 00:10:46.022 
"memory_domains": [ 00:10:46.022 { 00:10:46.022 "dma_device_id": "system", 00:10:46.022 "dma_device_type": 1 00:10:46.022 }, 00:10:46.022 { 00:10:46.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.022 "dma_device_type": 2 00:10:46.022 }, 00:10:46.022 { 00:10:46.022 "dma_device_id": "system", 00:10:46.022 "dma_device_type": 1 00:10:46.022 }, 00:10:46.022 { 00:10:46.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.022 "dma_device_type": 2 00:10:46.022 }, 00:10:46.022 { 00:10:46.022 "dma_device_id": "system", 00:10:46.022 "dma_device_type": 1 00:10:46.022 }, 00:10:46.023 { 00:10:46.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.023 "dma_device_type": 2 00:10:46.023 }, 00:10:46.023 { 00:10:46.023 "dma_device_id": "system", 00:10:46.023 "dma_device_type": 1 00:10:46.023 }, 00:10:46.023 { 00:10:46.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.023 "dma_device_type": 2 00:10:46.023 } 00:10:46.023 ], 00:10:46.023 "driver_specific": { 00:10:46.023 "raid": { 00:10:46.023 "uuid": "5d270023-eead-4fde-8f89-f44865489a79", 00:10:46.023 "strip_size_kb": 64, 00:10:46.023 "state": "online", 00:10:46.023 "raid_level": "concat", 00:10:46.023 "superblock": true, 00:10:46.023 "num_base_bdevs": 4, 00:10:46.023 "num_base_bdevs_discovered": 4, 00:10:46.023 "num_base_bdevs_operational": 4, 00:10:46.023 "base_bdevs_list": [ 00:10:46.023 { 00:10:46.023 "name": "pt1", 00:10:46.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.023 "is_configured": true, 00:10:46.023 "data_offset": 2048, 00:10:46.023 "data_size": 63488 00:10:46.023 }, 00:10:46.023 { 00:10:46.023 "name": "pt2", 00:10:46.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.023 "is_configured": true, 00:10:46.023 "data_offset": 2048, 00:10:46.023 "data_size": 63488 00:10:46.023 }, 00:10:46.023 { 00:10:46.023 "name": "pt3", 00:10:46.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.023 "is_configured": true, 00:10:46.023 "data_offset": 2048, 00:10:46.023 "data_size": 63488 
00:10:46.023 }, 00:10:46.023 { 00:10:46.023 "name": "pt4", 00:10:46.023 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.023 "is_configured": true, 00:10:46.023 "data_offset": 2048, 00:10:46.023 "data_size": 63488 00:10:46.023 } 00:10:46.023 ] 00:10:46.023 } 00:10:46.023 } 00:10:46.023 }' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:46.023 pt2 00:10:46.023 pt3 00:10:46.023 pt4' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.023 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.283 [2024-11-19 12:30:51.342701] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5d270023-eead-4fde-8f89-f44865489a79 '!=' 5d270023-eead-4fde-8f89-f44865489a79 ']' 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83654 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83654 ']' 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83654 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83654 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.283 killing process with pid 83654 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83654' 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83654 00:10:46.283 [2024-11-19 12:30:51.422374] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.283 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83654 00:10:46.283 [2024-11-19 12:30:51.422522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.283 [2024-11-19 12:30:51.422620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.283 [2024-11-19 12:30:51.422641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:46.283 [2024-11-19 12:30:51.504724] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.894 12:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:46.894 00:10:46.894 real 0m4.197s 00:10:46.894 user 0m6.411s 00:10:46.894 sys 0m0.966s 00:10:46.894 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.894 12:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.894 ************************************ 00:10:46.894 END TEST raid_superblock_test 
00:10:46.894 ************************************ 00:10:46.895 12:30:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:46.895 12:30:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:46.895 12:30:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.895 12:30:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.895 ************************************ 00:10:46.895 START TEST raid_read_error_test 00:10:46.895 ************************************ 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZcQFI0ztrt 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83902 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83902 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83902 ']' 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.895 12:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.895 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.895 [2024-11-19 12:30:52.105267] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:46.895 [2024-11-19 12:30:52.105406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83902 ] 00:10:47.154 [2024-11-19 12:30:52.251452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.154 [2024-11-19 12:30:52.336313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.414 [2024-11-19 12:30:52.418344] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.414 [2024-11-19 12:30:52.418392] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.983 BaseBdev1_malloc 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.983 true 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.983 [2024-11-19 12:30:52.993094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:47.983 [2024-11-19 12:30:52.993166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.983 [2024-11-19 12:30:52.993184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:47.983 [2024-11-19 12:30:52.993193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.983 [2024-11-19 12:30:52.995260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.983 [2024-11-19 12:30:52.995294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:47.983 BaseBdev1 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.983 12:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.983 BaseBdev2_malloc 00:10:47.983 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.983 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 true 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 [2024-11-19 12:30:53.050415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:47.984 [2024-11-19 12:30:53.050477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.984 [2024-11-19 12:30:53.050503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:47.984 [2024-11-19 12:30:53.050515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.984 [2024-11-19 12:30:53.053482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.984 [2024-11-19 12:30:53.053526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:47.984 BaseBdev2 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 BaseBdev3_malloc 00:10:47.984 12:30:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 true 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 [2024-11-19 12:30:53.091207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:47.984 [2024-11-19 12:30:53.091254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.984 [2024-11-19 12:30:53.091273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:47.984 [2024-11-19 12:30:53.091281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.984 [2024-11-19 12:30:53.093342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.984 [2024-11-19 12:30:53.093376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:47.984 BaseBdev3 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 BaseBdev4_malloc 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 true 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 [2024-11-19 12:30:53.131804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:47.984 [2024-11-19 12:30:53.131848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.984 [2024-11-19 12:30:53.131869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:47.984 [2024-11-19 12:30:53.131878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.984 [2024-11-19 12:30:53.133866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.984 [2024-11-19 12:30:53.133898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:47.984 BaseBdev4 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 [2024-11-19 12:30:53.143840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.984 [2024-11-19 12:30:53.145672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.984 [2024-11-19 12:30:53.145771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.984 [2024-11-19 12:30:53.145825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.984 [2024-11-19 12:30:53.146016] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:47.984 [2024-11-19 12:30:53.146035] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.984 [2024-11-19 12:30:53.146278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:47.984 [2024-11-19 12:30:53.146418] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:47.984 [2024-11-19 12:30:53.146440] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:47.984 [2024-11-19 12:30:53.146561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:47.984 12:30:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.984 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.984 "name": "raid_bdev1", 00:10:47.984 "uuid": "3072db20-f001-4bb2-bcb6-11d4d75d218b", 00:10:47.984 "strip_size_kb": 64, 00:10:47.984 "state": "online", 00:10:47.984 "raid_level": "concat", 00:10:47.984 "superblock": true, 00:10:47.984 "num_base_bdevs": 4, 00:10:47.984 "num_base_bdevs_discovered": 4, 00:10:47.984 "num_base_bdevs_operational": 4, 00:10:47.984 "base_bdevs_list": [ 
00:10:47.984 { 00:10:47.984 "name": "BaseBdev1", 00:10:47.984 "uuid": "26e05823-77b0-55fc-b668-2cb33391f853", 00:10:47.984 "is_configured": true, 00:10:47.984 "data_offset": 2048, 00:10:47.984 "data_size": 63488 00:10:47.984 }, 00:10:47.984 { 00:10:47.984 "name": "BaseBdev2", 00:10:47.984 "uuid": "6b37ba2c-760a-50fe-bd0c-a71b90f360e7", 00:10:47.984 "is_configured": true, 00:10:47.984 "data_offset": 2048, 00:10:47.984 "data_size": 63488 00:10:47.984 }, 00:10:47.984 { 00:10:47.984 "name": "BaseBdev3", 00:10:47.984 "uuid": "0596e0dd-91b3-5756-930d-bab796578a16", 00:10:47.984 "is_configured": true, 00:10:47.984 "data_offset": 2048, 00:10:47.985 "data_size": 63488 00:10:47.985 }, 00:10:47.985 { 00:10:47.985 "name": "BaseBdev4", 00:10:47.985 "uuid": "0c09ddbe-e6aa-58f3-99e1-0e9516dde8c2", 00:10:47.985 "is_configured": true, 00:10:47.985 "data_offset": 2048, 00:10:47.985 "data_size": 63488 00:10:47.985 } 00:10:47.985 ] 00:10:47.985 }' 00:10:47.985 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.985 12:30:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.553 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:48.553 12:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:48.553 [2024-11-19 12:30:53.699362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.490 12:30:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.490 12:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.491 12:30:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.491 "name": "raid_bdev1", 00:10:49.491 "uuid": "3072db20-f001-4bb2-bcb6-11d4d75d218b", 00:10:49.491 "strip_size_kb": 64, 00:10:49.491 "state": "online", 00:10:49.491 "raid_level": "concat", 00:10:49.491 "superblock": true, 00:10:49.491 "num_base_bdevs": 4, 00:10:49.491 "num_base_bdevs_discovered": 4, 00:10:49.491 "num_base_bdevs_operational": 4, 00:10:49.491 "base_bdevs_list": [ 00:10:49.491 { 00:10:49.491 "name": "BaseBdev1", 00:10:49.491 "uuid": "26e05823-77b0-55fc-b668-2cb33391f853", 00:10:49.491 "is_configured": true, 00:10:49.491 "data_offset": 2048, 00:10:49.491 "data_size": 63488 00:10:49.491 }, 00:10:49.491 { 00:10:49.491 "name": "BaseBdev2", 00:10:49.491 "uuid": "6b37ba2c-760a-50fe-bd0c-a71b90f360e7", 00:10:49.491 "is_configured": true, 00:10:49.491 "data_offset": 2048, 00:10:49.491 "data_size": 63488 00:10:49.491 }, 00:10:49.491 { 00:10:49.491 "name": "BaseBdev3", 00:10:49.491 "uuid": "0596e0dd-91b3-5756-930d-bab796578a16", 00:10:49.491 "is_configured": true, 00:10:49.491 "data_offset": 2048, 00:10:49.491 "data_size": 63488 00:10:49.491 }, 00:10:49.491 { 00:10:49.491 "name": "BaseBdev4", 00:10:49.491 "uuid": "0c09ddbe-e6aa-58f3-99e1-0e9516dde8c2", 00:10:49.491 "is_configured": true, 00:10:49.491 "data_offset": 2048, 00:10:49.491 "data_size": 63488 00:10:49.491 } 00:10:49.491 ] 00:10:49.491 }' 00:10:49.491 12:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.491 12:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.059 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.059 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.059 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.059 [2024-11-19 12:30:55.087260] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.059 [2024-11-19 12:30:55.087310] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.059 [2024-11-19 12:30:55.089804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.059 [2024-11-19 12:30:55.089858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.059 [2024-11-19 12:30:55.089902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.059 [2024-11-19 12:30:55.089910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:50.059 { 00:10:50.059 "results": [ 00:10:50.059 { 00:10:50.059 "job": "raid_bdev1", 00:10:50.059 "core_mask": "0x1", 00:10:50.059 "workload": "randrw", 00:10:50.059 "percentage": 50, 00:10:50.059 "status": "finished", 00:10:50.059 "queue_depth": 1, 00:10:50.059 "io_size": 131072, 00:10:50.059 "runtime": 1.388752, 00:10:50.059 "iops": 16528.509049851953, 00:10:50.059 "mibps": 2066.063631231494, 00:10:50.059 "io_failed": 1, 00:10:50.059 "io_timeout": 0, 00:10:50.059 "avg_latency_us": 83.99378362259937, 00:10:50.059 "min_latency_us": 24.593886462882097, 00:10:50.059 "max_latency_us": 1395.1441048034935 00:10:50.059 } 00:10:50.059 ], 00:10:50.059 "core_count": 1 00:10:50.059 } 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83902 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83902 ']' 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83902 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83902 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.060 killing process with pid 83902 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83902' 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83902 00:10:50.060 [2024-11-19 12:30:55.126103] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.060 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83902 00:10:50.060 [2024-11-19 12:30:55.161947] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZcQFI0ztrt 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:50.320 00:10:50.320 real 0m3.436s 00:10:50.320 user 0m4.244s 00:10:50.320 sys 0m0.681s 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:50.320 12:30:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.320 ************************************ 00:10:50.320 END TEST raid_read_error_test 00:10:50.320 ************************************ 00:10:50.320 12:30:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:50.320 12:30:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:50.320 12:30:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.320 12:30:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.320 ************************************ 00:10:50.320 START TEST raid_write_error_test 00:10:50.320 ************************************ 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LEvziNNZB2 00:10:50.320 12:30:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84038 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84038 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 84038 ']' 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.320 12:30:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.580 [2024-11-19 12:30:55.595540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:50.580 [2024-11-19 12:30:55.595663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84038 ] 00:10:50.580 [2024-11-19 12:30:55.754771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.580 [2024-11-19 12:30:55.801346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.841 [2024-11-19 12:30:55.846044] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.841 [2024-11-19 12:30:55.846085] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.409 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.409 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:51.409 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.409 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:51.409 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 BaseBdev1_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 true 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 [2024-11-19 12:30:56.433405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:51.410 [2024-11-19 12:30:56.433467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.410 [2024-11-19 12:30:56.433491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:51.410 [2024-11-19 12:30:56.433501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.410 [2024-11-19 12:30:56.435617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.410 [2024-11-19 12:30:56.435653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.410 BaseBdev1 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 BaseBdev2_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:51.410 12:30:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 true 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 [2024-11-19 12:30:56.493108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:51.410 [2024-11-19 12:30:56.493180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.410 [2024-11-19 12:30:56.493210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:51.410 [2024-11-19 12:30:56.493224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.410 [2024-11-19 12:30:56.496438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.410 [2024-11-19 12:30:56.496480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:51.410 BaseBdev2 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:51.410 BaseBdev3_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 true 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 [2024-11-19 12:30:56.534089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:51.410 [2024-11-19 12:30:56.534135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.410 [2024-11-19 12:30:56.534154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:51.410 [2024-11-19 12:30:56.534163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.410 [2024-11-19 12:30:56.536163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.410 [2024-11-19 12:30:56.536196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:51.410 BaseBdev3 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 BaseBdev4_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 true 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 [2024-11-19 12:30:56.574886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:51.410 [2024-11-19 12:30:56.574999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.410 [2024-11-19 12:30:56.575027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:51.410 [2024-11-19 12:30:56.575037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.410 [2024-11-19 12:30:56.577059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.410 [2024-11-19 12:30:56.577096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:51.410 BaseBdev4 
00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 [2024-11-19 12:30:56.586919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.410 [2024-11-19 12:30:56.588726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.410 [2024-11-19 12:30:56.588829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.410 [2024-11-19 12:30:56.588883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:51.410 [2024-11-19 12:30:56.589074] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:51.410 [2024-11-19 12:30:56.589093] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:51.410 [2024-11-19 12:30:56.589333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:51.410 [2024-11-19 12:30:56.589464] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:51.410 [2024-11-19 12:30:56.589480] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:51.410 [2024-11-19 12:30:56.589601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.410 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.411 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.411 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.411 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.411 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.411 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.411 "name": "raid_bdev1", 00:10:51.411 "uuid": "bb117373-c989-48f8-81da-729bb416db07", 00:10:51.411 "strip_size_kb": 64, 00:10:51.411 "state": "online", 00:10:51.411 "raid_level": "concat", 00:10:51.411 "superblock": true, 00:10:51.411 "num_base_bdevs": 4, 00:10:51.411 "num_base_bdevs_discovered": 4, 00:10:51.411 
"num_base_bdevs_operational": 4, 00:10:51.411 "base_bdevs_list": [ 00:10:51.411 { 00:10:51.411 "name": "BaseBdev1", 00:10:51.411 "uuid": "27443e5c-2deb-598f-ae53-1c196e96dd0a", 00:10:51.411 "is_configured": true, 00:10:51.411 "data_offset": 2048, 00:10:51.411 "data_size": 63488 00:10:51.411 }, 00:10:51.411 { 00:10:51.411 "name": "BaseBdev2", 00:10:51.411 "uuid": "9c6a3fdb-409f-5961-9370-7b9be0d28c17", 00:10:51.411 "is_configured": true, 00:10:51.411 "data_offset": 2048, 00:10:51.411 "data_size": 63488 00:10:51.411 }, 00:10:51.411 { 00:10:51.411 "name": "BaseBdev3", 00:10:51.411 "uuid": "29d854c4-edfe-513b-88de-266342fec156", 00:10:51.411 "is_configured": true, 00:10:51.411 "data_offset": 2048, 00:10:51.411 "data_size": 63488 00:10:51.411 }, 00:10:51.411 { 00:10:51.411 "name": "BaseBdev4", 00:10:51.411 "uuid": "185af2bd-b4c7-540c-af9b-380f8aacf2ca", 00:10:51.411 "is_configured": true, 00:10:51.411 "data_offset": 2048, 00:10:51.411 "data_size": 63488 00:10:51.411 } 00:10:51.411 ] 00:10:51.411 }' 00:10:51.411 12:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.411 12:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.979 12:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:51.979 12:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:51.979 [2024-11-19 12:30:57.202284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.918 12:30:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.918 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.918 "name": "raid_bdev1", 00:10:52.918 "uuid": "bb117373-c989-48f8-81da-729bb416db07", 00:10:52.918 "strip_size_kb": 64, 00:10:52.918 "state": "online", 00:10:52.918 "raid_level": "concat", 00:10:52.918 "superblock": true, 00:10:52.918 "num_base_bdevs": 4, 00:10:52.918 "num_base_bdevs_discovered": 4, 00:10:52.918 "num_base_bdevs_operational": 4, 00:10:52.918 "base_bdevs_list": [ 00:10:52.918 { 00:10:52.918 "name": "BaseBdev1", 00:10:52.918 "uuid": "27443e5c-2deb-598f-ae53-1c196e96dd0a", 00:10:52.918 "is_configured": true, 00:10:52.918 "data_offset": 2048, 00:10:52.918 "data_size": 63488 00:10:52.918 }, 00:10:52.918 { 00:10:52.918 "name": "BaseBdev2", 00:10:52.918 "uuid": "9c6a3fdb-409f-5961-9370-7b9be0d28c17", 00:10:52.918 "is_configured": true, 00:10:52.918 "data_offset": 2048, 00:10:52.918 "data_size": 63488 00:10:52.918 }, 00:10:52.918 { 00:10:52.918 "name": "BaseBdev3", 00:10:52.918 "uuid": "29d854c4-edfe-513b-88de-266342fec156", 00:10:52.918 "is_configured": true, 00:10:52.918 "data_offset": 2048, 00:10:52.918 "data_size": 63488 00:10:52.919 }, 00:10:52.919 { 00:10:52.919 "name": "BaseBdev4", 00:10:52.919 "uuid": "185af2bd-b4c7-540c-af9b-380f8aacf2ca", 00:10:52.919 "is_configured": true, 00:10:52.919 "data_offset": 2048, 00:10:52.919 "data_size": 63488 00:10:52.919 } 00:10:52.919 ] 00:10:52.919 }' 00:10:52.919 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.919 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.489 [2024-11-19 12:30:58.553857] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.489 [2024-11-19 12:30:58.553916] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.489 [2024-11-19 12:30:58.556417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.489 [2024-11-19 12:30:58.556476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.489 [2024-11-19 12:30:58.556520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.489 [2024-11-19 12:30:58.556530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:53.489 { 00:10:53.489 "results": [ 00:10:53.489 { 00:10:53.489 "job": "raid_bdev1", 00:10:53.489 "core_mask": "0x1", 00:10:53.489 "workload": "randrw", 00:10:53.489 "percentage": 50, 00:10:53.489 "status": "finished", 00:10:53.489 "queue_depth": 1, 00:10:53.489 "io_size": 131072, 00:10:53.489 "runtime": 1.352339, 00:10:53.489 "iops": 16537.273568239918, 00:10:53.489 "mibps": 2067.1591960299897, 00:10:53.489 "io_failed": 1, 00:10:53.489 "io_timeout": 0, 00:10:53.489 "avg_latency_us": 83.99315571253821, 00:10:53.489 "min_latency_us": 24.705676855895195, 00:10:53.489 "max_latency_us": 1402.2986899563318 00:10:53.489 } 00:10:53.489 ], 00:10:53.489 "core_count": 1 00:10:53.489 } 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84038 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 84038 ']' 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 84038 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84038 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84038' 00:10:53.489 killing process with pid 84038 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 84038 00:10:53.489 [2024-11-19 12:30:58.605701] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.489 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 84038 00:10:53.489 [2024-11-19 12:30:58.640672] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LEvziNNZB2 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:53.749 ************************************ 00:10:53.749 END TEST 
raid_write_error_test 00:10:53.749 ************************************ 00:10:53.749 00:10:53.749 real 0m3.409s 00:10:53.749 user 0m4.299s 00:10:53.749 sys 0m0.589s 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.749 12:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.749 12:30:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:53.749 12:30:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:53.749 12:30:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:53.749 12:30:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.749 12:30:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.749 ************************************ 00:10:53.749 START TEST raid_state_function_test 00:10:53.749 ************************************ 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.749 12:30:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:53.749 12:30:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:53.749 Process raid pid: 84165 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84165 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84165' 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84165 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84165 ']' 00:10:53.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.749 12:30:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.010 [2024-11-19 12:30:59.071930] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:54.010 [2024-11-19 12:30:59.072130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.010 [2024-11-19 12:30:59.214477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.010 [2024-11-19 12:30:59.263118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.269 [2024-11-19 12:30:59.307062] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.269 [2024-11-19 12:30:59.307099] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.836 [2024-11-19 12:30:59.900946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.836 [2024-11-19 12:30:59.901001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.836 [2024-11-19 12:30:59.901013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.836 [2024-11-19 12:30:59.901023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.836 [2024-11-19 12:30:59.901031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:54.836 [2024-11-19 12:30:59.901045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.836 [2024-11-19 12:30:59.901051] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.836 [2024-11-19 12:30:59.901059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.836 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.837 "name": "Existed_Raid", 00:10:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.837 "strip_size_kb": 0, 00:10:54.837 "state": "configuring", 00:10:54.837 "raid_level": "raid1", 00:10:54.837 "superblock": false, 00:10:54.837 "num_base_bdevs": 4, 00:10:54.837 "num_base_bdevs_discovered": 0, 00:10:54.837 "num_base_bdevs_operational": 4, 00:10:54.837 "base_bdevs_list": [ 00:10:54.837 { 00:10:54.837 "name": "BaseBdev1", 00:10:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.837 "is_configured": false, 00:10:54.837 "data_offset": 0, 00:10:54.837 "data_size": 0 00:10:54.837 }, 00:10:54.837 { 00:10:54.837 "name": "BaseBdev2", 00:10:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.837 "is_configured": false, 00:10:54.837 "data_offset": 0, 00:10:54.837 "data_size": 0 00:10:54.837 }, 00:10:54.837 { 00:10:54.837 "name": "BaseBdev3", 00:10:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.837 "is_configured": false, 00:10:54.837 "data_offset": 0, 00:10:54.837 "data_size": 0 00:10:54.837 }, 00:10:54.837 { 00:10:54.837 "name": "BaseBdev4", 00:10:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.837 "is_configured": false, 00:10:54.837 "data_offset": 0, 00:10:54.837 "data_size": 0 00:10:54.837 } 00:10:54.837 ] 00:10:54.837 }' 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.837 12:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.406 [2024-11-19 12:31:00.376068] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.406 [2024-11-19 12:31:00.376181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.406 [2024-11-19 12:31:00.384094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.406 [2024-11-19 12:31:00.384191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.406 [2024-11-19 12:31:00.384220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.406 [2024-11-19 12:31:00.384244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.406 [2024-11-19 12:31:00.384262] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.406 [2024-11-19 12:31:00.384284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.406 [2024-11-19 12:31:00.384302] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.406 [2024-11-19 12:31:00.384325] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.406 [2024-11-19 12:31:00.405121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.406 BaseBdev1 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.406 [ 00:10:55.406 { 00:10:55.406 "name": "BaseBdev1", 00:10:55.406 "aliases": [ 00:10:55.406 "9e57845f-3761-4629-99ba-297ff551ce37" 00:10:55.406 ], 00:10:55.406 "product_name": "Malloc disk", 00:10:55.406 "block_size": 512, 00:10:55.406 "num_blocks": 65536, 00:10:55.406 "uuid": "9e57845f-3761-4629-99ba-297ff551ce37", 00:10:55.406 "assigned_rate_limits": { 00:10:55.406 "rw_ios_per_sec": 0, 00:10:55.406 "rw_mbytes_per_sec": 0, 00:10:55.406 "r_mbytes_per_sec": 0, 00:10:55.406 "w_mbytes_per_sec": 0 00:10:55.406 }, 00:10:55.406 "claimed": true, 00:10:55.406 "claim_type": "exclusive_write", 00:10:55.406 "zoned": false, 00:10:55.406 "supported_io_types": { 00:10:55.406 "read": true, 00:10:55.406 "write": true, 00:10:55.406 "unmap": true, 00:10:55.406 "flush": true, 00:10:55.406 "reset": true, 00:10:55.406 "nvme_admin": false, 00:10:55.406 "nvme_io": false, 00:10:55.406 "nvme_io_md": false, 00:10:55.406 "write_zeroes": true, 00:10:55.406 "zcopy": true, 00:10:55.406 "get_zone_info": false, 00:10:55.406 "zone_management": false, 00:10:55.406 "zone_append": false, 00:10:55.406 "compare": false, 00:10:55.406 "compare_and_write": false, 00:10:55.406 "abort": true, 00:10:55.406 "seek_hole": false, 00:10:55.406 "seek_data": false, 00:10:55.406 "copy": true, 00:10:55.406 "nvme_iov_md": false 00:10:55.406 }, 00:10:55.406 "memory_domains": [ 00:10:55.406 { 00:10:55.406 "dma_device_id": "system", 00:10:55.406 "dma_device_type": 1 00:10:55.406 }, 00:10:55.406 { 00:10:55.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.406 "dma_device_type": 2 00:10:55.406 } 00:10:55.406 ], 00:10:55.406 "driver_specific": {} 00:10:55.406 } 00:10:55.406 ] 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.406 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.406 "name": "Existed_Raid", 
00:10:55.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.406 "strip_size_kb": 0, 00:10:55.406 "state": "configuring", 00:10:55.406 "raid_level": "raid1", 00:10:55.406 "superblock": false, 00:10:55.406 "num_base_bdevs": 4, 00:10:55.406 "num_base_bdevs_discovered": 1, 00:10:55.406 "num_base_bdevs_operational": 4, 00:10:55.406 "base_bdevs_list": [ 00:10:55.406 { 00:10:55.407 "name": "BaseBdev1", 00:10:55.407 "uuid": "9e57845f-3761-4629-99ba-297ff551ce37", 00:10:55.407 "is_configured": true, 00:10:55.407 "data_offset": 0, 00:10:55.407 "data_size": 65536 00:10:55.407 }, 00:10:55.407 { 00:10:55.407 "name": "BaseBdev2", 00:10:55.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.407 "is_configured": false, 00:10:55.407 "data_offset": 0, 00:10:55.407 "data_size": 0 00:10:55.407 }, 00:10:55.407 { 00:10:55.407 "name": "BaseBdev3", 00:10:55.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.407 "is_configured": false, 00:10:55.407 "data_offset": 0, 00:10:55.407 "data_size": 0 00:10:55.407 }, 00:10:55.407 { 00:10:55.407 "name": "BaseBdev4", 00:10:55.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.407 "is_configured": false, 00:10:55.407 "data_offset": 0, 00:10:55.407 "data_size": 0 00:10:55.407 } 00:10:55.407 ] 00:10:55.407 }' 00:10:55.407 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.407 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.666 [2024-11-19 12:31:00.872411] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.666 [2024-11-19 12:31:00.872473] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.666 [2024-11-19 12:31:00.884444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.666 [2024-11-19 12:31:00.886263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.666 [2024-11-19 12:31:00.886349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.666 [2024-11-19 12:31:00.886363] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.666 [2024-11-19 12:31:00.886372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.666 [2024-11-19 12:31:00.886378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.666 [2024-11-19 12:31:00.886386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.666 
12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.666 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.925 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.925 "name": "Existed_Raid", 00:10:55.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.925 "strip_size_kb": 0, 00:10:55.925 "state": "configuring", 00:10:55.925 "raid_level": "raid1", 00:10:55.925 "superblock": false, 00:10:55.925 "num_base_bdevs": 4, 00:10:55.925 "num_base_bdevs_discovered": 1, 
00:10:55.925 "num_base_bdevs_operational": 4, 00:10:55.925 "base_bdevs_list": [ 00:10:55.925 { 00:10:55.925 "name": "BaseBdev1", 00:10:55.925 "uuid": "9e57845f-3761-4629-99ba-297ff551ce37", 00:10:55.925 "is_configured": true, 00:10:55.925 "data_offset": 0, 00:10:55.925 "data_size": 65536 00:10:55.925 }, 00:10:55.925 { 00:10:55.925 "name": "BaseBdev2", 00:10:55.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.925 "is_configured": false, 00:10:55.925 "data_offset": 0, 00:10:55.925 "data_size": 0 00:10:55.925 }, 00:10:55.925 { 00:10:55.925 "name": "BaseBdev3", 00:10:55.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.925 "is_configured": false, 00:10:55.925 "data_offset": 0, 00:10:55.925 "data_size": 0 00:10:55.925 }, 00:10:55.925 { 00:10:55.925 "name": "BaseBdev4", 00:10:55.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.925 "is_configured": false, 00:10:55.925 "data_offset": 0, 00:10:55.925 "data_size": 0 00:10:55.925 } 00:10:55.925 ] 00:10:55.925 }' 00:10:55.925 12:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.925 12:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.185 [2024-11-19 12:31:01.347116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.185 BaseBdev2 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.185 [ 00:10:56.185 { 00:10:56.185 "name": "BaseBdev2", 00:10:56.185 "aliases": [ 00:10:56.185 "50d2d7c9-fafe-4a3e-b895-46b71a27934b" 00:10:56.185 ], 00:10:56.185 "product_name": "Malloc disk", 00:10:56.185 "block_size": 512, 00:10:56.185 "num_blocks": 65536, 00:10:56.185 "uuid": "50d2d7c9-fafe-4a3e-b895-46b71a27934b", 00:10:56.185 "assigned_rate_limits": { 00:10:56.185 "rw_ios_per_sec": 0, 00:10:56.185 "rw_mbytes_per_sec": 0, 00:10:56.185 "r_mbytes_per_sec": 0, 00:10:56.185 "w_mbytes_per_sec": 0 00:10:56.185 }, 00:10:56.185 "claimed": true, 00:10:56.185 "claim_type": "exclusive_write", 00:10:56.185 "zoned": false, 00:10:56.185 "supported_io_types": { 00:10:56.185 "read": true, 
00:10:56.185 "write": true, 00:10:56.185 "unmap": true, 00:10:56.185 "flush": true, 00:10:56.185 "reset": true, 00:10:56.185 "nvme_admin": false, 00:10:56.185 "nvme_io": false, 00:10:56.185 "nvme_io_md": false, 00:10:56.185 "write_zeroes": true, 00:10:56.185 "zcopy": true, 00:10:56.185 "get_zone_info": false, 00:10:56.185 "zone_management": false, 00:10:56.185 "zone_append": false, 00:10:56.185 "compare": false, 00:10:56.185 "compare_and_write": false, 00:10:56.185 "abort": true, 00:10:56.185 "seek_hole": false, 00:10:56.185 "seek_data": false, 00:10:56.185 "copy": true, 00:10:56.185 "nvme_iov_md": false 00:10:56.185 }, 00:10:56.185 "memory_domains": [ 00:10:56.185 { 00:10:56.185 "dma_device_id": "system", 00:10:56.185 "dma_device_type": 1 00:10:56.185 }, 00:10:56.185 { 00:10:56.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.185 "dma_device_type": 2 00:10:56.185 } 00:10:56.185 ], 00:10:56.185 "driver_specific": {} 00:10:56.185 } 00:10:56.185 ] 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.185 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.185 "name": "Existed_Raid", 00:10:56.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.185 "strip_size_kb": 0, 00:10:56.185 "state": "configuring", 00:10:56.185 "raid_level": "raid1", 00:10:56.185 "superblock": false, 00:10:56.185 "num_base_bdevs": 4, 00:10:56.185 "num_base_bdevs_discovered": 2, 00:10:56.185 "num_base_bdevs_operational": 4, 00:10:56.185 "base_bdevs_list": [ 00:10:56.185 { 00:10:56.185 "name": "BaseBdev1", 00:10:56.185 "uuid": "9e57845f-3761-4629-99ba-297ff551ce37", 00:10:56.185 "is_configured": true, 00:10:56.185 "data_offset": 0, 00:10:56.185 "data_size": 65536 00:10:56.185 }, 00:10:56.185 { 00:10:56.185 "name": "BaseBdev2", 00:10:56.185 "uuid": "50d2d7c9-fafe-4a3e-b895-46b71a27934b", 00:10:56.185 "is_configured": true, 
00:10:56.185 "data_offset": 0, 00:10:56.186 "data_size": 65536 00:10:56.186 }, 00:10:56.186 { 00:10:56.186 "name": "BaseBdev3", 00:10:56.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.186 "is_configured": false, 00:10:56.186 "data_offset": 0, 00:10:56.186 "data_size": 0 00:10:56.186 }, 00:10:56.186 { 00:10:56.186 "name": "BaseBdev4", 00:10:56.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.186 "is_configured": false, 00:10:56.186 "data_offset": 0, 00:10:56.186 "data_size": 0 00:10:56.186 } 00:10:56.186 ] 00:10:56.186 }' 00:10:56.186 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.186 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.759 BaseBdev3 00:10:56.759 [2024-11-19 12:31:01.841690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.759 [ 00:10:56.759 { 00:10:56.759 "name": "BaseBdev3", 00:10:56.759 "aliases": [ 00:10:56.759 "041c2b65-7a4d-435e-a08e-4c3ff7b26de8" 00:10:56.759 ], 00:10:56.759 "product_name": "Malloc disk", 00:10:56.759 "block_size": 512, 00:10:56.759 "num_blocks": 65536, 00:10:56.759 "uuid": "041c2b65-7a4d-435e-a08e-4c3ff7b26de8", 00:10:56.759 "assigned_rate_limits": { 00:10:56.759 "rw_ios_per_sec": 0, 00:10:56.759 "rw_mbytes_per_sec": 0, 00:10:56.759 "r_mbytes_per_sec": 0, 00:10:56.759 "w_mbytes_per_sec": 0 00:10:56.759 }, 00:10:56.759 "claimed": true, 00:10:56.759 "claim_type": "exclusive_write", 00:10:56.759 "zoned": false, 00:10:56.759 "supported_io_types": { 00:10:56.759 "read": true, 00:10:56.759 "write": true, 00:10:56.759 "unmap": true, 00:10:56.759 "flush": true, 00:10:56.759 "reset": true, 00:10:56.759 "nvme_admin": false, 00:10:56.759 "nvme_io": false, 00:10:56.759 "nvme_io_md": false, 00:10:56.759 "write_zeroes": true, 00:10:56.759 "zcopy": true, 00:10:56.759 "get_zone_info": false, 00:10:56.759 "zone_management": false, 00:10:56.759 "zone_append": false, 00:10:56.759 "compare": false, 00:10:56.759 "compare_and_write": false, 
00:10:56.759 "abort": true, 00:10:56.759 "seek_hole": false, 00:10:56.759 "seek_data": false, 00:10:56.759 "copy": true, 00:10:56.759 "nvme_iov_md": false 00:10:56.759 }, 00:10:56.759 "memory_domains": [ 00:10:56.759 { 00:10:56.759 "dma_device_id": "system", 00:10:56.759 "dma_device_type": 1 00:10:56.759 }, 00:10:56.759 { 00:10:56.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.759 "dma_device_type": 2 00:10:56.759 } 00:10:56.759 ], 00:10:56.759 "driver_specific": {} 00:10:56.759 } 00:10:56.759 ] 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.759 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.760 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.760 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.760 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.760 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.760 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.760 "name": "Existed_Raid", 00:10:56.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.760 "strip_size_kb": 0, 00:10:56.760 "state": "configuring", 00:10:56.760 "raid_level": "raid1", 00:10:56.760 "superblock": false, 00:10:56.760 "num_base_bdevs": 4, 00:10:56.760 "num_base_bdevs_discovered": 3, 00:10:56.760 "num_base_bdevs_operational": 4, 00:10:56.760 "base_bdevs_list": [ 00:10:56.760 { 00:10:56.760 "name": "BaseBdev1", 00:10:56.760 "uuid": "9e57845f-3761-4629-99ba-297ff551ce37", 00:10:56.760 "is_configured": true, 00:10:56.760 "data_offset": 0, 00:10:56.760 "data_size": 65536 00:10:56.760 }, 00:10:56.760 { 00:10:56.760 "name": "BaseBdev2", 00:10:56.760 "uuid": "50d2d7c9-fafe-4a3e-b895-46b71a27934b", 00:10:56.760 "is_configured": true, 00:10:56.760 "data_offset": 0, 00:10:56.760 "data_size": 65536 00:10:56.760 }, 00:10:56.760 { 00:10:56.760 "name": "BaseBdev3", 00:10:56.760 "uuid": "041c2b65-7a4d-435e-a08e-4c3ff7b26de8", 00:10:56.760 "is_configured": true, 00:10:56.760 "data_offset": 0, 00:10:56.760 "data_size": 65536 00:10:56.760 }, 00:10:56.760 { 00:10:56.760 "name": "BaseBdev4", 00:10:56.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.760 "is_configured": false, 
00:10:56.760 "data_offset": 0, 00:10:56.760 "data_size": 0 00:10:56.760 } 00:10:56.760 ] 00:10:56.760 }' 00:10:56.760 12:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.760 12:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.019 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.019 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.019 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.279 BaseBdev4 00:10:57.279 [2024-11-19 12:31:02.284339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.279 [2024-11-19 12:31:02.284399] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:57.279 [2024-11-19 12:31:02.284409] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:57.279 [2024-11-19 12:31:02.284697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:57.279 [2024-11-19 12:31:02.284855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:57.279 [2024-11-19 12:31:02.284869] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:57.279 [2024-11-19 12:31:02.285058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.279 [ 00:10:57.279 { 00:10:57.279 "name": "BaseBdev4", 00:10:57.279 "aliases": [ 00:10:57.279 "69e7bb74-1779-4b22-be4b-26edbb56a543" 00:10:57.279 ], 00:10:57.279 "product_name": "Malloc disk", 00:10:57.279 "block_size": 512, 00:10:57.279 "num_blocks": 65536, 00:10:57.279 "uuid": "69e7bb74-1779-4b22-be4b-26edbb56a543", 00:10:57.279 "assigned_rate_limits": { 00:10:57.279 "rw_ios_per_sec": 0, 00:10:57.279 "rw_mbytes_per_sec": 0, 00:10:57.279 "r_mbytes_per_sec": 0, 00:10:57.279 "w_mbytes_per_sec": 0 00:10:57.279 }, 00:10:57.279 "claimed": true, 00:10:57.279 "claim_type": "exclusive_write", 00:10:57.279 "zoned": false, 00:10:57.279 "supported_io_types": { 00:10:57.279 "read": true, 00:10:57.279 "write": true, 00:10:57.279 "unmap": true, 00:10:57.279 "flush": true, 00:10:57.279 "reset": true, 00:10:57.279 
"nvme_admin": false, 00:10:57.279 "nvme_io": false, 00:10:57.279 "nvme_io_md": false, 00:10:57.279 "write_zeroes": true, 00:10:57.279 "zcopy": true, 00:10:57.279 "get_zone_info": false, 00:10:57.279 "zone_management": false, 00:10:57.279 "zone_append": false, 00:10:57.279 "compare": false, 00:10:57.279 "compare_and_write": false, 00:10:57.279 "abort": true, 00:10:57.279 "seek_hole": false, 00:10:57.279 "seek_data": false, 00:10:57.279 "copy": true, 00:10:57.279 "nvme_iov_md": false 00:10:57.279 }, 00:10:57.279 "memory_domains": [ 00:10:57.279 { 00:10:57.279 "dma_device_id": "system", 00:10:57.279 "dma_device_type": 1 00:10:57.279 }, 00:10:57.279 { 00:10:57.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.279 "dma_device_type": 2 00:10:57.279 } 00:10:57.279 ], 00:10:57.279 "driver_specific": {} 00:10:57.279 } 00:10:57.279 ] 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.279 12:31:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.279 "name": "Existed_Raid", 00:10:57.279 "uuid": "c06ad6d7-8026-4098-a200-0b4594337d3a", 00:10:57.279 "strip_size_kb": 0, 00:10:57.279 "state": "online", 00:10:57.279 "raid_level": "raid1", 00:10:57.279 "superblock": false, 00:10:57.279 "num_base_bdevs": 4, 00:10:57.279 "num_base_bdevs_discovered": 4, 00:10:57.279 "num_base_bdevs_operational": 4, 00:10:57.279 "base_bdevs_list": [ 00:10:57.279 { 00:10:57.279 "name": "BaseBdev1", 00:10:57.279 "uuid": "9e57845f-3761-4629-99ba-297ff551ce37", 00:10:57.279 "is_configured": true, 00:10:57.279 "data_offset": 0, 00:10:57.279 "data_size": 65536 00:10:57.279 }, 00:10:57.279 { 00:10:57.279 "name": "BaseBdev2", 00:10:57.279 "uuid": "50d2d7c9-fafe-4a3e-b895-46b71a27934b", 00:10:57.279 "is_configured": true, 00:10:57.279 "data_offset": 0, 00:10:57.279 "data_size": 65536 00:10:57.279 }, 00:10:57.279 { 00:10:57.279 "name": "BaseBdev3", 00:10:57.279 "uuid": 
"041c2b65-7a4d-435e-a08e-4c3ff7b26de8", 00:10:57.279 "is_configured": true, 00:10:57.279 "data_offset": 0, 00:10:57.279 "data_size": 65536 00:10:57.279 }, 00:10:57.279 { 00:10:57.279 "name": "BaseBdev4", 00:10:57.279 "uuid": "69e7bb74-1779-4b22-be4b-26edbb56a543", 00:10:57.279 "is_configured": true, 00:10:57.279 "data_offset": 0, 00:10:57.279 "data_size": 65536 00:10:57.279 } 00:10:57.279 ] 00:10:57.279 }' 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.279 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.539 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.539 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.540 [2024-11-19 12:31:02.752112] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.540 12:31:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.540 "name": "Existed_Raid", 00:10:57.540 "aliases": [ 00:10:57.540 "c06ad6d7-8026-4098-a200-0b4594337d3a" 00:10:57.540 ], 00:10:57.540 "product_name": "Raid Volume", 00:10:57.540 "block_size": 512, 00:10:57.540 "num_blocks": 65536, 00:10:57.540 "uuid": "c06ad6d7-8026-4098-a200-0b4594337d3a", 00:10:57.540 "assigned_rate_limits": { 00:10:57.540 "rw_ios_per_sec": 0, 00:10:57.540 "rw_mbytes_per_sec": 0, 00:10:57.540 "r_mbytes_per_sec": 0, 00:10:57.540 "w_mbytes_per_sec": 0 00:10:57.540 }, 00:10:57.540 "claimed": false, 00:10:57.540 "zoned": false, 00:10:57.540 "supported_io_types": { 00:10:57.540 "read": true, 00:10:57.540 "write": true, 00:10:57.540 "unmap": false, 00:10:57.540 "flush": false, 00:10:57.540 "reset": true, 00:10:57.540 "nvme_admin": false, 00:10:57.540 "nvme_io": false, 00:10:57.540 "nvme_io_md": false, 00:10:57.540 "write_zeroes": true, 00:10:57.540 "zcopy": false, 00:10:57.540 "get_zone_info": false, 00:10:57.540 "zone_management": false, 00:10:57.540 "zone_append": false, 00:10:57.540 "compare": false, 00:10:57.540 "compare_and_write": false, 00:10:57.540 "abort": false, 00:10:57.540 "seek_hole": false, 00:10:57.540 "seek_data": false, 00:10:57.540 "copy": false, 00:10:57.540 "nvme_iov_md": false 00:10:57.540 }, 00:10:57.540 "memory_domains": [ 00:10:57.540 { 00:10:57.540 "dma_device_id": "system", 00:10:57.540 "dma_device_type": 1 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.540 "dma_device_type": 2 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "dma_device_id": "system", 00:10:57.540 "dma_device_type": 1 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.540 "dma_device_type": 2 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "dma_device_id": "system", 00:10:57.540 "dma_device_type": 1 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:57.540 "dma_device_type": 2 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "dma_device_id": "system", 00:10:57.540 "dma_device_type": 1 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.540 "dma_device_type": 2 00:10:57.540 } 00:10:57.540 ], 00:10:57.540 "driver_specific": { 00:10:57.540 "raid": { 00:10:57.540 "uuid": "c06ad6d7-8026-4098-a200-0b4594337d3a", 00:10:57.540 "strip_size_kb": 0, 00:10:57.540 "state": "online", 00:10:57.540 "raid_level": "raid1", 00:10:57.540 "superblock": false, 00:10:57.540 "num_base_bdevs": 4, 00:10:57.540 "num_base_bdevs_discovered": 4, 00:10:57.540 "num_base_bdevs_operational": 4, 00:10:57.540 "base_bdevs_list": [ 00:10:57.540 { 00:10:57.540 "name": "BaseBdev1", 00:10:57.540 "uuid": "9e57845f-3761-4629-99ba-297ff551ce37", 00:10:57.540 "is_configured": true, 00:10:57.540 "data_offset": 0, 00:10:57.540 "data_size": 65536 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "name": "BaseBdev2", 00:10:57.540 "uuid": "50d2d7c9-fafe-4a3e-b895-46b71a27934b", 00:10:57.540 "is_configured": true, 00:10:57.540 "data_offset": 0, 00:10:57.540 "data_size": 65536 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "name": "BaseBdev3", 00:10:57.540 "uuid": "041c2b65-7a4d-435e-a08e-4c3ff7b26de8", 00:10:57.540 "is_configured": true, 00:10:57.540 "data_offset": 0, 00:10:57.540 "data_size": 65536 00:10:57.540 }, 00:10:57.540 { 00:10:57.540 "name": "BaseBdev4", 00:10:57.540 "uuid": "69e7bb74-1779-4b22-be4b-26edbb56a543", 00:10:57.540 "is_configured": true, 00:10:57.540 "data_offset": 0, 00:10:57.540 "data_size": 65536 00:10:57.540 } 00:10:57.540 ] 00:10:57.540 } 00:10:57.540 } 00:10:57.540 }' 00:10:57.540 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:57.800 BaseBdev2 00:10:57.800 BaseBdev3 
00:10:57.800 BaseBdev4' 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.800 12:31:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.800 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.801 12:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.801 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.801 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.801 12:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.801 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.801 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.801 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.801 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:57.801 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.801 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.801 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.801 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.061 12:31:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.061 [2024-11-19 12:31:03.075212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.061 
12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.061 "name": "Existed_Raid", 00:10:58.061 "uuid": "c06ad6d7-8026-4098-a200-0b4594337d3a", 00:10:58.061 "strip_size_kb": 0, 00:10:58.061 "state": "online", 00:10:58.061 "raid_level": "raid1", 00:10:58.061 "superblock": false, 00:10:58.061 "num_base_bdevs": 4, 00:10:58.061 "num_base_bdevs_discovered": 3, 00:10:58.061 "num_base_bdevs_operational": 3, 00:10:58.061 "base_bdevs_list": [ 00:10:58.061 { 00:10:58.061 "name": null, 00:10:58.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.061 "is_configured": false, 00:10:58.061 "data_offset": 0, 00:10:58.061 "data_size": 65536 00:10:58.061 }, 00:10:58.061 { 00:10:58.061 "name": "BaseBdev2", 00:10:58.061 "uuid": "50d2d7c9-fafe-4a3e-b895-46b71a27934b", 00:10:58.061 "is_configured": true, 00:10:58.061 "data_offset": 0, 00:10:58.061 "data_size": 65536 00:10:58.061 }, 00:10:58.061 { 00:10:58.061 "name": "BaseBdev3", 00:10:58.061 "uuid": "041c2b65-7a4d-435e-a08e-4c3ff7b26de8", 00:10:58.061 "is_configured": true, 00:10:58.061 "data_offset": 0, 
00:10:58.061 "data_size": 65536 00:10:58.061 }, 00:10:58.061 { 00:10:58.061 "name": "BaseBdev4", 00:10:58.061 "uuid": "69e7bb74-1779-4b22-be4b-26edbb56a543", 00:10:58.061 "is_configured": true, 00:10:58.061 "data_offset": 0, 00:10:58.061 "data_size": 65536 00:10:58.061 } 00:10:58.061 ] 00:10:58.061 }' 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.061 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.321 [2024-11-19 12:31:03.541920] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.321 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.582 [2024-11-19 12:31:03.609199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.582 [2024-11-19 12:31:03.676695] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:58.582 [2024-11-19 12:31:03.676890] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.582 [2024-11-19 12:31:03.688852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.582 [2024-11-19 12:31:03.688902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.582 [2024-11-19 12:31:03.688915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.582 BaseBdev2 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.582 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.583 [ 00:10:58.583 { 00:10:58.583 "name": "BaseBdev2", 00:10:58.583 "aliases": [ 00:10:58.583 "b0d239c8-eaa4-45da-9fb4-f3a24af86bba" 00:10:58.583 ], 00:10:58.583 "product_name": "Malloc disk", 00:10:58.583 "block_size": 512, 00:10:58.583 "num_blocks": 65536, 00:10:58.583 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:10:58.583 "assigned_rate_limits": { 00:10:58.583 "rw_ios_per_sec": 0, 00:10:58.583 "rw_mbytes_per_sec": 0, 00:10:58.583 "r_mbytes_per_sec": 0, 00:10:58.583 "w_mbytes_per_sec": 0 00:10:58.583 }, 00:10:58.583 "claimed": false, 00:10:58.583 "zoned": false, 00:10:58.583 "supported_io_types": { 00:10:58.583 "read": true, 00:10:58.583 "write": true, 00:10:58.583 "unmap": true, 00:10:58.583 "flush": true, 00:10:58.583 "reset": true, 00:10:58.583 "nvme_admin": false, 00:10:58.583 "nvme_io": false, 00:10:58.583 "nvme_io_md": false, 00:10:58.583 "write_zeroes": true, 00:10:58.583 "zcopy": true, 00:10:58.583 "get_zone_info": false, 00:10:58.583 "zone_management": false, 00:10:58.583 "zone_append": false, 
00:10:58.583 "compare": false, 00:10:58.583 "compare_and_write": false, 00:10:58.583 "abort": true, 00:10:58.583 "seek_hole": false, 00:10:58.583 "seek_data": false, 00:10:58.583 "copy": true, 00:10:58.583 "nvme_iov_md": false 00:10:58.583 }, 00:10:58.583 "memory_domains": [ 00:10:58.583 { 00:10:58.583 "dma_device_id": "system", 00:10:58.583 "dma_device_type": 1 00:10:58.583 }, 00:10:58.583 { 00:10:58.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.583 "dma_device_type": 2 00:10:58.583 } 00:10:58.583 ], 00:10:58.583 "driver_specific": {} 00:10:58.583 } 00:10:58.583 ] 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.583 BaseBdev3 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.583 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.583 [ 00:10:58.583 { 00:10:58.583 "name": "BaseBdev3", 00:10:58.583 "aliases": [ 00:10:58.583 "1ddeb7e8-d574-4527-ac4b-f3237e784677" 00:10:58.583 ], 00:10:58.583 "product_name": "Malloc disk", 00:10:58.583 "block_size": 512, 00:10:58.583 "num_blocks": 65536, 00:10:58.583 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:10:58.583 "assigned_rate_limits": { 00:10:58.583 "rw_ios_per_sec": 0, 00:10:58.583 "rw_mbytes_per_sec": 0, 00:10:58.583 "r_mbytes_per_sec": 0, 00:10:58.583 "w_mbytes_per_sec": 0 00:10:58.583 }, 00:10:58.583 "claimed": false, 00:10:58.583 "zoned": false, 00:10:58.583 "supported_io_types": { 00:10:58.583 "read": true, 00:10:58.583 "write": true, 00:10:58.583 "unmap": true, 00:10:58.583 "flush": true, 00:10:58.583 "reset": true, 00:10:58.583 "nvme_admin": false, 00:10:58.583 "nvme_io": false, 00:10:58.583 "nvme_io_md": false, 00:10:58.583 "write_zeroes": true, 00:10:58.583 "zcopy": true, 00:10:58.583 "get_zone_info": false, 00:10:58.583 "zone_management": false, 00:10:58.583 "zone_append": false, 
00:10:58.583 "compare": false, 00:10:58.583 "compare_and_write": false, 00:10:58.583 "abort": true, 00:10:58.583 "seek_hole": false, 00:10:58.583 "seek_data": false, 00:10:58.583 "copy": true, 00:10:58.583 "nvme_iov_md": false 00:10:58.583 }, 00:10:58.583 "memory_domains": [ 00:10:58.583 { 00:10:58.583 "dma_device_id": "system", 00:10:58.583 "dma_device_type": 1 00:10:58.583 }, 00:10:58.583 { 00:10:58.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.583 "dma_device_type": 2 00:10:58.844 } 00:10:58.844 ], 00:10:58.844 "driver_specific": {} 00:10:58.844 } 00:10:58.844 ] 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.844 BaseBdev4 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.844 [ 00:10:58.844 { 00:10:58.844 "name": "BaseBdev4", 00:10:58.844 "aliases": [ 00:10:58.844 "0540f1d5-19df-4d05-bfe3-26484d2a925c" 00:10:58.844 ], 00:10:58.844 "product_name": "Malloc disk", 00:10:58.844 "block_size": 512, 00:10:58.844 "num_blocks": 65536, 00:10:58.844 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:10:58.844 "assigned_rate_limits": { 00:10:58.844 "rw_ios_per_sec": 0, 00:10:58.844 "rw_mbytes_per_sec": 0, 00:10:58.844 "r_mbytes_per_sec": 0, 00:10:58.844 "w_mbytes_per_sec": 0 00:10:58.844 }, 00:10:58.844 "claimed": false, 00:10:58.844 "zoned": false, 00:10:58.844 "supported_io_types": { 00:10:58.844 "read": true, 00:10:58.844 "write": true, 00:10:58.844 "unmap": true, 00:10:58.844 "flush": true, 00:10:58.844 "reset": true, 00:10:58.844 "nvme_admin": false, 00:10:58.844 "nvme_io": false, 00:10:58.844 "nvme_io_md": false, 00:10:58.844 "write_zeroes": true, 00:10:58.844 "zcopy": true, 00:10:58.844 "get_zone_info": false, 00:10:58.844 "zone_management": false, 00:10:58.844 "zone_append": false, 
00:10:58.844 "compare": false, 00:10:58.844 "compare_and_write": false, 00:10:58.844 "abort": true, 00:10:58.844 "seek_hole": false, 00:10:58.844 "seek_data": false, 00:10:58.844 "copy": true, 00:10:58.844 "nvme_iov_md": false 00:10:58.844 }, 00:10:58.844 "memory_domains": [ 00:10:58.844 { 00:10:58.844 "dma_device_id": "system", 00:10:58.844 "dma_device_type": 1 00:10:58.844 }, 00:10:58.844 { 00:10:58.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.844 "dma_device_type": 2 00:10:58.844 } 00:10:58.844 ], 00:10:58.844 "driver_specific": {} 00:10:58.844 } 00:10:58.844 ] 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.844 [2024-11-19 12:31:03.905912] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.844 [2024-11-19 12:31:03.906040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.844 [2024-11-19 12:31:03.906079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.844 [2024-11-19 12:31:03.907948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.844 [2024-11-19 12:31:03.908033] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:58.844 "name": "Existed_Raid", 00:10:58.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.844 "strip_size_kb": 0, 00:10:58.844 "state": "configuring", 00:10:58.844 "raid_level": "raid1", 00:10:58.844 "superblock": false, 00:10:58.844 "num_base_bdevs": 4, 00:10:58.844 "num_base_bdevs_discovered": 3, 00:10:58.844 "num_base_bdevs_operational": 4, 00:10:58.844 "base_bdevs_list": [ 00:10:58.844 { 00:10:58.844 "name": "BaseBdev1", 00:10:58.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.844 "is_configured": false, 00:10:58.844 "data_offset": 0, 00:10:58.844 "data_size": 0 00:10:58.844 }, 00:10:58.844 { 00:10:58.844 "name": "BaseBdev2", 00:10:58.844 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:10:58.844 "is_configured": true, 00:10:58.844 "data_offset": 0, 00:10:58.844 "data_size": 65536 00:10:58.844 }, 00:10:58.844 { 00:10:58.844 "name": "BaseBdev3", 00:10:58.844 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:10:58.844 "is_configured": true, 00:10:58.844 "data_offset": 0, 00:10:58.844 "data_size": 65536 00:10:58.844 }, 00:10:58.844 { 00:10:58.844 "name": "BaseBdev4", 00:10:58.844 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:10:58.844 "is_configured": true, 00:10:58.844 "data_offset": 0, 00:10:58.844 "data_size": 65536 00:10:58.844 } 00:10:58.844 ] 00:10:58.844 }' 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.844 12:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.105 [2024-11-19 12:31:04.297255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.105 "name": "Existed_Raid", 00:10:59.105 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:59.105 "strip_size_kb": 0, 00:10:59.105 "state": "configuring", 00:10:59.105 "raid_level": "raid1", 00:10:59.105 "superblock": false, 00:10:59.105 "num_base_bdevs": 4, 00:10:59.105 "num_base_bdevs_discovered": 2, 00:10:59.105 "num_base_bdevs_operational": 4, 00:10:59.105 "base_bdevs_list": [ 00:10:59.105 { 00:10:59.105 "name": "BaseBdev1", 00:10:59.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.105 "is_configured": false, 00:10:59.105 "data_offset": 0, 00:10:59.105 "data_size": 0 00:10:59.105 }, 00:10:59.105 { 00:10:59.105 "name": null, 00:10:59.105 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:10:59.105 "is_configured": false, 00:10:59.105 "data_offset": 0, 00:10:59.105 "data_size": 65536 00:10:59.105 }, 00:10:59.105 { 00:10:59.105 "name": "BaseBdev3", 00:10:59.105 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:10:59.105 "is_configured": true, 00:10:59.105 "data_offset": 0, 00:10:59.105 "data_size": 65536 00:10:59.105 }, 00:10:59.105 { 00:10:59.105 "name": "BaseBdev4", 00:10:59.105 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:10:59.105 "is_configured": true, 00:10:59.105 "data_offset": 0, 00:10:59.105 "data_size": 65536 00:10:59.105 } 00:10:59.105 ] 00:10:59.105 }' 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.105 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.674 [2024-11-19 12:31:04.775711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.674 BaseBdev1 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.674 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.674 [ 00:10:59.674 { 00:10:59.674 "name": "BaseBdev1", 00:10:59.674 "aliases": [ 00:10:59.674 "33c09221-7b98-40d6-ac1c-b77b8d596ea1" 00:10:59.674 ], 00:10:59.674 "product_name": "Malloc disk", 00:10:59.674 "block_size": 512, 00:10:59.674 "num_blocks": 65536, 00:10:59.674 "uuid": "33c09221-7b98-40d6-ac1c-b77b8d596ea1", 00:10:59.674 "assigned_rate_limits": { 00:10:59.674 "rw_ios_per_sec": 0, 00:10:59.674 "rw_mbytes_per_sec": 0, 00:10:59.674 "r_mbytes_per_sec": 0, 00:10:59.675 "w_mbytes_per_sec": 0 00:10:59.675 }, 00:10:59.675 "claimed": true, 00:10:59.675 "claim_type": "exclusive_write", 00:10:59.675 "zoned": false, 00:10:59.675 "supported_io_types": { 00:10:59.675 "read": true, 00:10:59.675 "write": true, 00:10:59.675 "unmap": true, 00:10:59.675 "flush": true, 00:10:59.675 "reset": true, 00:10:59.675 "nvme_admin": false, 00:10:59.675 "nvme_io": false, 00:10:59.675 "nvme_io_md": false, 00:10:59.675 "write_zeroes": true, 00:10:59.675 "zcopy": true, 00:10:59.675 "get_zone_info": false, 00:10:59.675 "zone_management": false, 00:10:59.675 "zone_append": false, 00:10:59.675 "compare": false, 00:10:59.675 "compare_and_write": false, 00:10:59.675 "abort": true, 00:10:59.675 "seek_hole": false, 00:10:59.675 "seek_data": false, 00:10:59.675 "copy": true, 00:10:59.675 "nvme_iov_md": false 00:10:59.675 }, 00:10:59.675 "memory_domains": [ 00:10:59.675 { 00:10:59.675 "dma_device_id": "system", 00:10:59.675 "dma_device_type": 1 00:10:59.675 }, 00:10:59.675 { 00:10:59.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.675 "dma_device_type": 2 00:10:59.675 } 00:10:59.675 ], 00:10:59.675 "driver_specific": {} 00:10:59.675 } 00:10:59.675 ] 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.675 "name": "Existed_Raid", 00:10:59.675 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:59.675 "strip_size_kb": 0, 00:10:59.675 "state": "configuring", 00:10:59.675 "raid_level": "raid1", 00:10:59.675 "superblock": false, 00:10:59.675 "num_base_bdevs": 4, 00:10:59.675 "num_base_bdevs_discovered": 3, 00:10:59.675 "num_base_bdevs_operational": 4, 00:10:59.675 "base_bdevs_list": [ 00:10:59.675 { 00:10:59.675 "name": "BaseBdev1", 00:10:59.675 "uuid": "33c09221-7b98-40d6-ac1c-b77b8d596ea1", 00:10:59.675 "is_configured": true, 00:10:59.675 "data_offset": 0, 00:10:59.675 "data_size": 65536 00:10:59.675 }, 00:10:59.675 { 00:10:59.675 "name": null, 00:10:59.675 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:10:59.675 "is_configured": false, 00:10:59.675 "data_offset": 0, 00:10:59.675 "data_size": 65536 00:10:59.675 }, 00:10:59.675 { 00:10:59.675 "name": "BaseBdev3", 00:10:59.675 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:10:59.675 "is_configured": true, 00:10:59.675 "data_offset": 0, 00:10:59.675 "data_size": 65536 00:10:59.675 }, 00:10:59.675 { 00:10:59.675 "name": "BaseBdev4", 00:10:59.675 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:10:59.675 "is_configured": true, 00:10:59.675 "data_offset": 0, 00:10:59.675 "data_size": 65536 00:10:59.675 } 00:10:59.675 ] 00:10:59.675 }' 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.675 12:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.245 [2024-11-19 12:31:05.342885] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.245 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.246 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.246 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.246 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.246 "name": "Existed_Raid", 00:11:00.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.246 "strip_size_kb": 0, 00:11:00.246 "state": "configuring", 00:11:00.246 "raid_level": "raid1", 00:11:00.246 "superblock": false, 00:11:00.246 "num_base_bdevs": 4, 00:11:00.246 "num_base_bdevs_discovered": 2, 00:11:00.246 "num_base_bdevs_operational": 4, 00:11:00.246 "base_bdevs_list": [ 00:11:00.246 { 00:11:00.246 "name": "BaseBdev1", 00:11:00.246 "uuid": "33c09221-7b98-40d6-ac1c-b77b8d596ea1", 00:11:00.246 "is_configured": true, 00:11:00.246 "data_offset": 0, 00:11:00.246 "data_size": 65536 00:11:00.246 }, 00:11:00.246 { 00:11:00.246 "name": null, 00:11:00.246 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:11:00.246 "is_configured": false, 00:11:00.246 "data_offset": 0, 00:11:00.246 "data_size": 65536 00:11:00.246 }, 00:11:00.246 { 00:11:00.246 "name": null, 00:11:00.246 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:11:00.246 "is_configured": false, 00:11:00.246 "data_offset": 0, 00:11:00.246 "data_size": 65536 00:11:00.246 }, 00:11:00.246 { 00:11:00.246 "name": "BaseBdev4", 00:11:00.246 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:11:00.246 "is_configured": true, 00:11:00.246 "data_offset": 0, 00:11:00.246 "data_size": 65536 00:11:00.246 } 00:11:00.246 ] 00:11:00.246 }' 00:11:00.246 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.246 12:31:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.816 [2024-11-19 12:31:05.818058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.816 12:31:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.816 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.817 "name": "Existed_Raid", 00:11:00.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.817 "strip_size_kb": 0, 00:11:00.817 "state": "configuring", 00:11:00.817 "raid_level": "raid1", 00:11:00.817 "superblock": false, 00:11:00.817 "num_base_bdevs": 4, 00:11:00.817 "num_base_bdevs_discovered": 3, 00:11:00.817 "num_base_bdevs_operational": 4, 00:11:00.817 "base_bdevs_list": [ 00:11:00.817 { 00:11:00.817 "name": "BaseBdev1", 00:11:00.817 "uuid": "33c09221-7b98-40d6-ac1c-b77b8d596ea1", 00:11:00.817 "is_configured": true, 00:11:00.817 "data_offset": 0, 00:11:00.817 "data_size": 65536 00:11:00.817 }, 00:11:00.817 { 00:11:00.817 "name": null, 00:11:00.817 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:11:00.817 "is_configured": false, 00:11:00.817 "data_offset": 
0, 00:11:00.817 "data_size": 65536 00:11:00.817 }, 00:11:00.817 { 00:11:00.817 "name": "BaseBdev3", 00:11:00.817 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:11:00.817 "is_configured": true, 00:11:00.817 "data_offset": 0, 00:11:00.817 "data_size": 65536 00:11:00.817 }, 00:11:00.817 { 00:11:00.817 "name": "BaseBdev4", 00:11:00.817 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:11:00.817 "is_configured": true, 00:11:00.817 "data_offset": 0, 00:11:00.817 "data_size": 65536 00:11:00.817 } 00:11:00.817 ] 00:11:00.817 }' 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.817 12:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.078 [2024-11-19 12:31:06.273305] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.078 12:31:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.078 "name": "Existed_Raid", 00:11:01.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.078 "strip_size_kb": 0, 00:11:01.078 "state": "configuring", 00:11:01.078 
"raid_level": "raid1", 00:11:01.078 "superblock": false, 00:11:01.078 "num_base_bdevs": 4, 00:11:01.078 "num_base_bdevs_discovered": 2, 00:11:01.078 "num_base_bdevs_operational": 4, 00:11:01.078 "base_bdevs_list": [ 00:11:01.078 { 00:11:01.078 "name": null, 00:11:01.078 "uuid": "33c09221-7b98-40d6-ac1c-b77b8d596ea1", 00:11:01.078 "is_configured": false, 00:11:01.078 "data_offset": 0, 00:11:01.078 "data_size": 65536 00:11:01.078 }, 00:11:01.078 { 00:11:01.078 "name": null, 00:11:01.078 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:11:01.078 "is_configured": false, 00:11:01.078 "data_offset": 0, 00:11:01.078 "data_size": 65536 00:11:01.078 }, 00:11:01.078 { 00:11:01.078 "name": "BaseBdev3", 00:11:01.078 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:11:01.078 "is_configured": true, 00:11:01.078 "data_offset": 0, 00:11:01.078 "data_size": 65536 00:11:01.078 }, 00:11:01.078 { 00:11:01.078 "name": "BaseBdev4", 00:11:01.078 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:11:01.078 "is_configured": true, 00:11:01.078 "data_offset": 0, 00:11:01.078 "data_size": 65536 00:11:01.078 } 00:11:01.078 ] 00:11:01.078 }' 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.078 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.648 [2024-11-19 12:31:06.759154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.648 "name": "Existed_Raid", 00:11:01.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.648 "strip_size_kb": 0, 00:11:01.648 "state": "configuring", 00:11:01.648 "raid_level": "raid1", 00:11:01.648 "superblock": false, 00:11:01.648 "num_base_bdevs": 4, 00:11:01.648 "num_base_bdevs_discovered": 3, 00:11:01.648 "num_base_bdevs_operational": 4, 00:11:01.648 "base_bdevs_list": [ 00:11:01.648 { 00:11:01.648 "name": null, 00:11:01.648 "uuid": "33c09221-7b98-40d6-ac1c-b77b8d596ea1", 00:11:01.648 "is_configured": false, 00:11:01.648 "data_offset": 0, 00:11:01.648 "data_size": 65536 00:11:01.648 }, 00:11:01.648 { 00:11:01.648 "name": "BaseBdev2", 00:11:01.648 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:11:01.648 "is_configured": true, 00:11:01.648 "data_offset": 0, 00:11:01.648 "data_size": 65536 00:11:01.648 }, 00:11:01.648 { 00:11:01.648 "name": "BaseBdev3", 00:11:01.648 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:11:01.648 "is_configured": true, 00:11:01.648 "data_offset": 0, 00:11:01.648 "data_size": 65536 00:11:01.648 }, 00:11:01.648 { 00:11:01.648 "name": "BaseBdev4", 00:11:01.648 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:11:01.648 "is_configured": true, 00:11:01.648 "data_offset": 0, 00:11:01.648 "data_size": 65536 00:11:01.648 } 00:11:01.648 ] 00:11:01.648 }' 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.648 12:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.225 12:31:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 33c09221-7b98-40d6-ac1c-b77b8d596ea1 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.225 [2024-11-19 12:31:07.309341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:02.225 [2024-11-19 12:31:07.309395] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:02.225 [2024-11-19 12:31:07.309407] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:02.225 
[2024-11-19 12:31:07.309658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:02.225 [2024-11-19 12:31:07.309811] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:02.225 [2024-11-19 12:31:07.309829] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:02.225 [2024-11-19 12:31:07.310006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.225 NewBaseBdev 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:02.225 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.225 [ 00:11:02.225 { 00:11:02.225 "name": "NewBaseBdev", 00:11:02.225 "aliases": [ 00:11:02.225 "33c09221-7b98-40d6-ac1c-b77b8d596ea1" 00:11:02.225 ], 00:11:02.225 "product_name": "Malloc disk", 00:11:02.225 "block_size": 512, 00:11:02.225 "num_blocks": 65536, 00:11:02.225 "uuid": "33c09221-7b98-40d6-ac1c-b77b8d596ea1", 00:11:02.225 "assigned_rate_limits": { 00:11:02.225 "rw_ios_per_sec": 0, 00:11:02.225 "rw_mbytes_per_sec": 0, 00:11:02.225 "r_mbytes_per_sec": 0, 00:11:02.225 "w_mbytes_per_sec": 0 00:11:02.225 }, 00:11:02.225 "claimed": true, 00:11:02.225 "claim_type": "exclusive_write", 00:11:02.225 "zoned": false, 00:11:02.225 "supported_io_types": { 00:11:02.225 "read": true, 00:11:02.225 "write": true, 00:11:02.225 "unmap": true, 00:11:02.225 "flush": true, 00:11:02.225 "reset": true, 00:11:02.225 "nvme_admin": false, 00:11:02.225 "nvme_io": false, 00:11:02.225 "nvme_io_md": false, 00:11:02.225 "write_zeroes": true, 00:11:02.225 "zcopy": true, 00:11:02.225 "get_zone_info": false, 00:11:02.225 "zone_management": false, 00:11:02.225 "zone_append": false, 00:11:02.225 "compare": false, 00:11:02.225 "compare_and_write": false, 00:11:02.225 "abort": true, 00:11:02.226 "seek_hole": false, 00:11:02.226 "seek_data": false, 00:11:02.226 "copy": true, 00:11:02.226 "nvme_iov_md": false 00:11:02.226 }, 00:11:02.226 "memory_domains": [ 00:11:02.226 { 00:11:02.226 "dma_device_id": "system", 00:11:02.226 "dma_device_type": 1 00:11:02.226 }, 00:11:02.226 { 00:11:02.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.226 "dma_device_type": 2 00:11:02.226 } 00:11:02.226 ], 00:11:02.226 "driver_specific": {} 00:11:02.226 } 00:11:02.226 ] 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.226 "name": "Existed_Raid", 00:11:02.226 "uuid": "446c5099-d425-4e45-b66a-e9358713596a", 00:11:02.226 "strip_size_kb": 0, 00:11:02.226 "state": "online", 00:11:02.226 
"raid_level": "raid1", 00:11:02.226 "superblock": false, 00:11:02.226 "num_base_bdevs": 4, 00:11:02.226 "num_base_bdevs_discovered": 4, 00:11:02.226 "num_base_bdevs_operational": 4, 00:11:02.226 "base_bdevs_list": [ 00:11:02.226 { 00:11:02.226 "name": "NewBaseBdev", 00:11:02.226 "uuid": "33c09221-7b98-40d6-ac1c-b77b8d596ea1", 00:11:02.226 "is_configured": true, 00:11:02.226 "data_offset": 0, 00:11:02.226 "data_size": 65536 00:11:02.226 }, 00:11:02.226 { 00:11:02.226 "name": "BaseBdev2", 00:11:02.226 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:11:02.226 "is_configured": true, 00:11:02.226 "data_offset": 0, 00:11:02.226 "data_size": 65536 00:11:02.226 }, 00:11:02.226 { 00:11:02.226 "name": "BaseBdev3", 00:11:02.226 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:11:02.226 "is_configured": true, 00:11:02.226 "data_offset": 0, 00:11:02.226 "data_size": 65536 00:11:02.226 }, 00:11:02.226 { 00:11:02.226 "name": "BaseBdev4", 00:11:02.226 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:11:02.226 "is_configured": true, 00:11:02.226 "data_offset": 0, 00:11:02.226 "data_size": 65536 00:11:02.226 } 00:11:02.226 ] 00:11:02.226 }' 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.226 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.805 [2024-11-19 12:31:07.788879] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.805 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.805 "name": "Existed_Raid", 00:11:02.805 "aliases": [ 00:11:02.805 "446c5099-d425-4e45-b66a-e9358713596a" 00:11:02.805 ], 00:11:02.805 "product_name": "Raid Volume", 00:11:02.805 "block_size": 512, 00:11:02.805 "num_blocks": 65536, 00:11:02.805 "uuid": "446c5099-d425-4e45-b66a-e9358713596a", 00:11:02.805 "assigned_rate_limits": { 00:11:02.805 "rw_ios_per_sec": 0, 00:11:02.805 "rw_mbytes_per_sec": 0, 00:11:02.805 "r_mbytes_per_sec": 0, 00:11:02.805 "w_mbytes_per_sec": 0 00:11:02.805 }, 00:11:02.805 "claimed": false, 00:11:02.805 "zoned": false, 00:11:02.805 "supported_io_types": { 00:11:02.805 "read": true, 00:11:02.805 "write": true, 00:11:02.805 "unmap": false, 00:11:02.805 "flush": false, 00:11:02.805 "reset": true, 00:11:02.805 "nvme_admin": false, 00:11:02.805 "nvme_io": false, 00:11:02.805 "nvme_io_md": false, 00:11:02.805 "write_zeroes": true, 00:11:02.805 "zcopy": false, 00:11:02.805 "get_zone_info": false, 00:11:02.805 "zone_management": false, 00:11:02.805 "zone_append": false, 00:11:02.805 "compare": false, 00:11:02.805 "compare_and_write": false, 00:11:02.805 "abort": false, 00:11:02.805 "seek_hole": false, 00:11:02.805 "seek_data": false, 00:11:02.805 
"copy": false, 00:11:02.805 "nvme_iov_md": false 00:11:02.805 }, 00:11:02.805 "memory_domains": [ 00:11:02.805 { 00:11:02.805 "dma_device_id": "system", 00:11:02.805 "dma_device_type": 1 00:11:02.805 }, 00:11:02.805 { 00:11:02.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.805 "dma_device_type": 2 00:11:02.805 }, 00:11:02.805 { 00:11:02.805 "dma_device_id": "system", 00:11:02.805 "dma_device_type": 1 00:11:02.805 }, 00:11:02.805 { 00:11:02.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.806 "dma_device_type": 2 00:11:02.806 }, 00:11:02.806 { 00:11:02.806 "dma_device_id": "system", 00:11:02.806 "dma_device_type": 1 00:11:02.806 }, 00:11:02.806 { 00:11:02.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.806 "dma_device_type": 2 00:11:02.806 }, 00:11:02.806 { 00:11:02.806 "dma_device_id": "system", 00:11:02.806 "dma_device_type": 1 00:11:02.806 }, 00:11:02.806 { 00:11:02.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.806 "dma_device_type": 2 00:11:02.806 } 00:11:02.806 ], 00:11:02.806 "driver_specific": { 00:11:02.806 "raid": { 00:11:02.806 "uuid": "446c5099-d425-4e45-b66a-e9358713596a", 00:11:02.806 "strip_size_kb": 0, 00:11:02.806 "state": "online", 00:11:02.806 "raid_level": "raid1", 00:11:02.806 "superblock": false, 00:11:02.806 "num_base_bdevs": 4, 00:11:02.806 "num_base_bdevs_discovered": 4, 00:11:02.806 "num_base_bdevs_operational": 4, 00:11:02.806 "base_bdevs_list": [ 00:11:02.806 { 00:11:02.806 "name": "NewBaseBdev", 00:11:02.806 "uuid": "33c09221-7b98-40d6-ac1c-b77b8d596ea1", 00:11:02.806 "is_configured": true, 00:11:02.806 "data_offset": 0, 00:11:02.806 "data_size": 65536 00:11:02.806 }, 00:11:02.806 { 00:11:02.806 "name": "BaseBdev2", 00:11:02.806 "uuid": "b0d239c8-eaa4-45da-9fb4-f3a24af86bba", 00:11:02.806 "is_configured": true, 00:11:02.806 "data_offset": 0, 00:11:02.806 "data_size": 65536 00:11:02.806 }, 00:11:02.806 { 00:11:02.806 "name": "BaseBdev3", 00:11:02.806 "uuid": "1ddeb7e8-d574-4527-ac4b-f3237e784677", 00:11:02.806 
"is_configured": true, 00:11:02.806 "data_offset": 0, 00:11:02.806 "data_size": 65536 00:11:02.806 }, 00:11:02.806 { 00:11:02.806 "name": "BaseBdev4", 00:11:02.806 "uuid": "0540f1d5-19df-4d05-bfe3-26484d2a925c", 00:11:02.806 "is_configured": true, 00:11:02.806 "data_offset": 0, 00:11:02.806 "data_size": 65536 00:11:02.806 } 00:11:02.806 ] 00:11:02.806 } 00:11:02.806 } 00:11:02.806 }' 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:02.806 BaseBdev2 00:11:02.806 BaseBdev3 00:11:02.806 BaseBdev4' 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.806 12:31:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.806 12:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.806 12:31:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.806 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.066 [2024-11-19 12:31:08.104013] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.066 [2024-11-19 12:31:08.104044] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.066 [2024-11-19 12:31:08.104127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.066 [2024-11-19 12:31:08.104372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.066 [2024-11-19 12:31:08.104397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 84165 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84165 ']' 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84165 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84165 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:03.066 killing process with pid 84165 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84165' 00:11:03.066 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84165 00:11:03.067 [2024-11-19 12:31:08.153795] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.067 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84165 00:11:03.067 [2024-11-19 12:31:08.194187] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.326 12:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:03.326 00:11:03.326 real 0m9.470s 00:11:03.326 user 0m16.059s 00:11:03.326 sys 0m2.063s 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.327 ************************************ 00:11:03.327 END TEST raid_state_function_test 00:11:03.327 ************************************ 
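The test that just finished repeatedly verifies raid state by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq` filters such as `.[] | select(.name == "Existed_Raid")`. As a hedged sketch (not part of the SPDK test suite), the same check can be expressed in Python against the JSON shape visible in the dumps above; the sample record and the `verify_raid_bdev_state` helper name are taken from the log, but the record is abridged and the function body is an approximation of what the shell helper asserts.

```python
import json

# Abridged raid bdev record; field names copied from the rpc dump in the log above.
raid_bdev_json = '''
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "NewBaseBdev", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
'''

def verify_raid_bdev_state(info, expected_state, raid_level, num_operational):
    """Approximation of the shell helper's checks: state, level, and bdev counts."""
    if info["state"] != expected_state:
        return False
    if info["raid_level"] != raid_level:
        return False
    if info["num_base_bdevs_operational"] != num_operational:
        return False
    # Once the array is online, every base bdev in the list must be configured.
    if expected_state == "online":
        return all(b["is_configured"] for b in info["base_bdevs_list"])
    return True

info = json.loads(raid_bdev_json)
print(verify_raid_bdev_state(info, "online", "raid1", 4))  # True
```

In the log, the same walk from `configuring` (one unconfigured slot, `num_base_bdevs_discovered` 3) to `online` (all four configured) happens after `bdev_malloc_create ... -b NewBaseBdev -u 33c09221-...` supplies the missing base bdev.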
00:11:03.327 12:31:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:03.327 12:31:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:03.327 12:31:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.327 12:31:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.327 ************************************ 00:11:03.327 START TEST raid_state_function_test_sb 00:11:03.327 ************************************ 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.327 
12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84814 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84814' 00:11:03.327 Process raid pid: 84814 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84814 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84814 ']' 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.327 12:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.587 [2024-11-19 12:31:08.606957] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:03.587 [2024-11-19 12:31:08.607083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.587 [2024-11-19 12:31:08.750933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.587 [2024-11-19 12:31:08.797626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.587 [2024-11-19 12:31:08.841546] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.587 [2024-11-19 12:31:08.841586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.155 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.155 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.413 [2024-11-19 12:31:09.419932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.413 [2024-11-19 12:31:09.419982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.413 [2024-11-19 12:31:09.419995] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.413 [2024-11-19 12:31:09.420004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.413 [2024-11-19 12:31:09.420012] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:04.413 [2024-11-19 12:31:09.420024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.413 [2024-11-19 12:31:09.420030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:04.413 [2024-11-19 12:31:09.420039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.413 12:31:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.413 "name": "Existed_Raid", 00:11:04.413 "uuid": "a193f39e-213c-404b-b674-6b21f4ed5ee7", 00:11:04.413 "strip_size_kb": 0, 00:11:04.413 "state": "configuring", 00:11:04.413 "raid_level": "raid1", 00:11:04.413 "superblock": true, 00:11:04.413 "num_base_bdevs": 4, 00:11:04.413 "num_base_bdevs_discovered": 0, 00:11:04.413 "num_base_bdevs_operational": 4, 00:11:04.413 "base_bdevs_list": [ 00:11:04.413 { 00:11:04.413 "name": "BaseBdev1", 00:11:04.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.413 "is_configured": false, 00:11:04.413 "data_offset": 0, 00:11:04.413 "data_size": 0 00:11:04.413 }, 00:11:04.413 { 00:11:04.413 "name": "BaseBdev2", 00:11:04.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.413 "is_configured": false, 00:11:04.413 "data_offset": 0, 00:11:04.413 "data_size": 0 00:11:04.413 }, 00:11:04.413 { 00:11:04.413 "name": "BaseBdev3", 00:11:04.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.413 "is_configured": false, 00:11:04.413 "data_offset": 0, 00:11:04.413 "data_size": 0 00:11:04.413 }, 00:11:04.413 { 00:11:04.413 "name": "BaseBdev4", 00:11:04.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.413 "is_configured": false, 00:11:04.413 "data_offset": 0, 00:11:04.413 "data_size": 0 00:11:04.413 } 00:11:04.413 ] 00:11:04.413 }' 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.413 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.672 [2024-11-19 12:31:09.863097] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.672 [2024-11-19 12:31:09.863218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.672 [2024-11-19 12:31:09.875095] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.672 [2024-11-19 12:31:09.875139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.672 [2024-11-19 12:31:09.875149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.672 [2024-11-19 12:31:09.875158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.672 [2024-11-19 12:31:09.875164] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:04.672 [2024-11-19 12:31:09.875173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.672 [2024-11-19 12:31:09.875180] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:04.672 [2024-11-19 12:31:09.875188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.672 [2024-11-19 12:31:09.896198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.672 BaseBdev1 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:04.672 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:04.673 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:04.673 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:04.673 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.673 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.673 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:04.673 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:04.673 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.673 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.673 [ 00:11:04.673 { 00:11:04.673 "name": "BaseBdev1", 00:11:04.673 "aliases": [ 00:11:04.673 "54d1ad0e-5875-47fa-9604-353e0bb23722" 00:11:04.673 ], 00:11:04.673 "product_name": "Malloc disk", 00:11:04.673 "block_size": 512, 00:11:04.673 "num_blocks": 65536, 00:11:04.673 "uuid": "54d1ad0e-5875-47fa-9604-353e0bb23722", 00:11:04.673 "assigned_rate_limits": { 00:11:04.673 "rw_ios_per_sec": 0, 00:11:04.673 "rw_mbytes_per_sec": 0, 00:11:04.673 "r_mbytes_per_sec": 0, 00:11:04.673 "w_mbytes_per_sec": 0 00:11:04.673 }, 00:11:04.673 "claimed": true, 00:11:04.673 "claim_type": "exclusive_write", 00:11:04.673 "zoned": false, 00:11:04.673 "supported_io_types": { 00:11:04.673 "read": true, 00:11:04.673 "write": true, 00:11:04.673 "unmap": true, 00:11:04.673 "flush": true, 00:11:04.673 "reset": true, 00:11:04.673 "nvme_admin": false, 00:11:04.673 "nvme_io": false, 00:11:04.673 "nvme_io_md": false, 00:11:04.673 "write_zeroes": true, 00:11:04.673 "zcopy": true, 00:11:04.673 "get_zone_info": false, 00:11:04.673 "zone_management": false, 00:11:04.673 "zone_append": false, 00:11:04.673 "compare": false, 00:11:04.673 "compare_and_write": false, 00:11:04.673 "abort": true, 00:11:04.673 "seek_hole": false, 00:11:04.673 "seek_data": false, 00:11:04.673 "copy": true, 00:11:04.673 "nvme_iov_md": false 00:11:04.673 }, 00:11:04.673 "memory_domains": [ 00:11:04.673 { 00:11:04.673 "dma_device_id": "system", 00:11:04.932 "dma_device_type": 1 00:11:04.932 }, 00:11:04.932 { 00:11:04.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.932 "dma_device_type": 2 00:11:04.932 } 00:11:04.932 ], 00:11:04.932 "driver_specific": {} 
00:11:04.932 } 00:11:04.932 ] 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.932 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.932 "name": "Existed_Raid", 00:11:04.932 "uuid": "ed995f39-1356-4c65-abfd-e2e1182fac19", 00:11:04.932 "strip_size_kb": 0, 00:11:04.932 "state": "configuring", 00:11:04.932 "raid_level": "raid1", 00:11:04.932 "superblock": true, 00:11:04.932 "num_base_bdevs": 4, 00:11:04.932 "num_base_bdevs_discovered": 1, 00:11:04.932 "num_base_bdevs_operational": 4, 00:11:04.932 "base_bdevs_list": [ 00:11:04.932 { 00:11:04.932 "name": "BaseBdev1", 00:11:04.932 "uuid": "54d1ad0e-5875-47fa-9604-353e0bb23722", 00:11:04.932 "is_configured": true, 00:11:04.932 "data_offset": 2048, 00:11:04.932 "data_size": 63488 00:11:04.932 }, 00:11:04.932 { 00:11:04.932 "name": "BaseBdev2", 00:11:04.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.932 "is_configured": false, 00:11:04.932 "data_offset": 0, 00:11:04.932 "data_size": 0 00:11:04.932 }, 00:11:04.932 { 00:11:04.932 "name": "BaseBdev3", 00:11:04.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.932 "is_configured": false, 00:11:04.932 "data_offset": 0, 00:11:04.932 "data_size": 0 00:11:04.932 }, 00:11:04.932 { 00:11:04.932 "name": "BaseBdev4", 00:11:04.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.932 "is_configured": false, 00:11:04.932 "data_offset": 0, 00:11:04.932 "data_size": 0 00:11:04.933 } 00:11:04.933 ] 00:11:04.933 }' 00:11:04.933 12:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.933 12:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.192 [2024-11-19 12:31:10.379480] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.192 [2024-11-19 12:31:10.379546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.192 [2024-11-19 12:31:10.391501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.192 [2024-11-19 12:31:10.393361] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.192 [2024-11-19 12:31:10.393441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.192 [2024-11-19 12:31:10.393470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.192 [2024-11-19 12:31:10.393493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.192 [2024-11-19 12:31:10.393511] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.192 [2024-11-19 12:31:10.393530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:05.192 12:31:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.192 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.192 "name": 
"Existed_Raid", 00:11:05.192 "uuid": "375d6935-ab9b-4e51-bf24-7db42323c63f", 00:11:05.192 "strip_size_kb": 0, 00:11:05.192 "state": "configuring", 00:11:05.192 "raid_level": "raid1", 00:11:05.192 "superblock": true, 00:11:05.192 "num_base_bdevs": 4, 00:11:05.192 "num_base_bdevs_discovered": 1, 00:11:05.192 "num_base_bdevs_operational": 4, 00:11:05.192 "base_bdevs_list": [ 00:11:05.192 { 00:11:05.192 "name": "BaseBdev1", 00:11:05.192 "uuid": "54d1ad0e-5875-47fa-9604-353e0bb23722", 00:11:05.192 "is_configured": true, 00:11:05.192 "data_offset": 2048, 00:11:05.192 "data_size": 63488 00:11:05.192 }, 00:11:05.192 { 00:11:05.192 "name": "BaseBdev2", 00:11:05.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.192 "is_configured": false, 00:11:05.192 "data_offset": 0, 00:11:05.192 "data_size": 0 00:11:05.192 }, 00:11:05.192 { 00:11:05.192 "name": "BaseBdev3", 00:11:05.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.193 "is_configured": false, 00:11:05.193 "data_offset": 0, 00:11:05.193 "data_size": 0 00:11:05.193 }, 00:11:05.193 { 00:11:05.193 "name": "BaseBdev4", 00:11:05.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.193 "is_configured": false, 00:11:05.193 "data_offset": 0, 00:11:05.193 "data_size": 0 00:11:05.193 } 00:11:05.193 ] 00:11:05.193 }' 00:11:05.193 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.452 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.713 [2024-11-19 12:31:10.817286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.713 
BaseBdev2 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.713 [ 00:11:05.713 { 00:11:05.713 "name": "BaseBdev2", 00:11:05.713 "aliases": [ 00:11:05.713 "01960bf7-9ebf-4eab-9415-f3df3c314979" 00:11:05.713 ], 00:11:05.713 "product_name": "Malloc disk", 00:11:05.713 "block_size": 512, 00:11:05.713 "num_blocks": 65536, 00:11:05.713 "uuid": "01960bf7-9ebf-4eab-9415-f3df3c314979", 00:11:05.713 "assigned_rate_limits": { 
00:11:05.713 "rw_ios_per_sec": 0, 00:11:05.713 "rw_mbytes_per_sec": 0, 00:11:05.713 "r_mbytes_per_sec": 0, 00:11:05.713 "w_mbytes_per_sec": 0 00:11:05.713 }, 00:11:05.713 "claimed": true, 00:11:05.713 "claim_type": "exclusive_write", 00:11:05.713 "zoned": false, 00:11:05.713 "supported_io_types": { 00:11:05.713 "read": true, 00:11:05.713 "write": true, 00:11:05.713 "unmap": true, 00:11:05.713 "flush": true, 00:11:05.713 "reset": true, 00:11:05.713 "nvme_admin": false, 00:11:05.713 "nvme_io": false, 00:11:05.713 "nvme_io_md": false, 00:11:05.713 "write_zeroes": true, 00:11:05.713 "zcopy": true, 00:11:05.713 "get_zone_info": false, 00:11:05.713 "zone_management": false, 00:11:05.713 "zone_append": false, 00:11:05.713 "compare": false, 00:11:05.713 "compare_and_write": false, 00:11:05.713 "abort": true, 00:11:05.713 "seek_hole": false, 00:11:05.713 "seek_data": false, 00:11:05.713 "copy": true, 00:11:05.713 "nvme_iov_md": false 00:11:05.713 }, 00:11:05.713 "memory_domains": [ 00:11:05.713 { 00:11:05.713 "dma_device_id": "system", 00:11:05.713 "dma_device_type": 1 00:11:05.713 }, 00:11:05.713 { 00:11:05.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.713 "dma_device_type": 2 00:11:05.713 } 00:11:05.713 ], 00:11:05.713 "driver_specific": {} 00:11:05.713 } 00:11:05.713 ] 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.713 "name": "Existed_Raid", 00:11:05.713 "uuid": "375d6935-ab9b-4e51-bf24-7db42323c63f", 00:11:05.713 "strip_size_kb": 0, 00:11:05.713 "state": "configuring", 00:11:05.713 "raid_level": "raid1", 00:11:05.713 "superblock": true, 00:11:05.713 "num_base_bdevs": 4, 00:11:05.713 "num_base_bdevs_discovered": 2, 00:11:05.713 "num_base_bdevs_operational": 4, 00:11:05.713 
"base_bdevs_list": [ 00:11:05.713 { 00:11:05.713 "name": "BaseBdev1", 00:11:05.713 "uuid": "54d1ad0e-5875-47fa-9604-353e0bb23722", 00:11:05.713 "is_configured": true, 00:11:05.713 "data_offset": 2048, 00:11:05.713 "data_size": 63488 00:11:05.713 }, 00:11:05.713 { 00:11:05.713 "name": "BaseBdev2", 00:11:05.713 "uuid": "01960bf7-9ebf-4eab-9415-f3df3c314979", 00:11:05.713 "is_configured": true, 00:11:05.713 "data_offset": 2048, 00:11:05.713 "data_size": 63488 00:11:05.713 }, 00:11:05.713 { 00:11:05.713 "name": "BaseBdev3", 00:11:05.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.713 "is_configured": false, 00:11:05.713 "data_offset": 0, 00:11:05.713 "data_size": 0 00:11:05.713 }, 00:11:05.713 { 00:11:05.713 "name": "BaseBdev4", 00:11:05.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.713 "is_configured": false, 00:11:05.713 "data_offset": 0, 00:11:05.713 "data_size": 0 00:11:05.713 } 00:11:05.713 ] 00:11:05.713 }' 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.713 12:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.283 [2024-11-19 12:31:11.319944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.283 BaseBdev3 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.283 [ 00:11:06.283 { 00:11:06.283 "name": "BaseBdev3", 00:11:06.283 "aliases": [ 00:11:06.283 "7f2be3d9-8e06-4c42-8135-3af72c5ca068" 00:11:06.283 ], 00:11:06.283 "product_name": "Malloc disk", 00:11:06.283 "block_size": 512, 00:11:06.283 "num_blocks": 65536, 00:11:06.283 "uuid": "7f2be3d9-8e06-4c42-8135-3af72c5ca068", 00:11:06.283 "assigned_rate_limits": { 00:11:06.283 "rw_ios_per_sec": 0, 00:11:06.283 "rw_mbytes_per_sec": 0, 00:11:06.283 "r_mbytes_per_sec": 0, 00:11:06.283 "w_mbytes_per_sec": 0 00:11:06.283 }, 00:11:06.283 "claimed": true, 00:11:06.283 "claim_type": "exclusive_write", 00:11:06.283 "zoned": false, 00:11:06.283 "supported_io_types": { 00:11:06.283 "read": true, 00:11:06.283 
"write": true, 00:11:06.283 "unmap": true, 00:11:06.283 "flush": true, 00:11:06.283 "reset": true, 00:11:06.283 "nvme_admin": false, 00:11:06.283 "nvme_io": false, 00:11:06.283 "nvme_io_md": false, 00:11:06.283 "write_zeroes": true, 00:11:06.283 "zcopy": true, 00:11:06.283 "get_zone_info": false, 00:11:06.283 "zone_management": false, 00:11:06.283 "zone_append": false, 00:11:06.283 "compare": false, 00:11:06.283 "compare_and_write": false, 00:11:06.283 "abort": true, 00:11:06.283 "seek_hole": false, 00:11:06.283 "seek_data": false, 00:11:06.283 "copy": true, 00:11:06.283 "nvme_iov_md": false 00:11:06.283 }, 00:11:06.283 "memory_domains": [ 00:11:06.283 { 00:11:06.283 "dma_device_id": "system", 00:11:06.283 "dma_device_type": 1 00:11:06.283 }, 00:11:06.283 { 00:11:06.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.283 "dma_device_type": 2 00:11:06.283 } 00:11:06.283 ], 00:11:06.283 "driver_specific": {} 00:11:06.283 } 00:11:06.283 ] 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.283 "name": "Existed_Raid", 00:11:06.283 "uuid": "375d6935-ab9b-4e51-bf24-7db42323c63f", 00:11:06.283 "strip_size_kb": 0, 00:11:06.283 "state": "configuring", 00:11:06.283 "raid_level": "raid1", 00:11:06.283 "superblock": true, 00:11:06.283 "num_base_bdevs": 4, 00:11:06.283 "num_base_bdevs_discovered": 3, 00:11:06.283 "num_base_bdevs_operational": 4, 00:11:06.283 "base_bdevs_list": [ 00:11:06.283 { 00:11:06.283 "name": "BaseBdev1", 00:11:06.283 "uuid": "54d1ad0e-5875-47fa-9604-353e0bb23722", 00:11:06.283 "is_configured": true, 00:11:06.283 "data_offset": 2048, 00:11:06.283 "data_size": 63488 00:11:06.283 }, 00:11:06.283 { 00:11:06.283 "name": "BaseBdev2", 00:11:06.283 "uuid": 
"01960bf7-9ebf-4eab-9415-f3df3c314979", 00:11:06.283 "is_configured": true, 00:11:06.283 "data_offset": 2048, 00:11:06.283 "data_size": 63488 00:11:06.283 }, 00:11:06.283 { 00:11:06.283 "name": "BaseBdev3", 00:11:06.283 "uuid": "7f2be3d9-8e06-4c42-8135-3af72c5ca068", 00:11:06.283 "is_configured": true, 00:11:06.283 "data_offset": 2048, 00:11:06.283 "data_size": 63488 00:11:06.283 }, 00:11:06.283 { 00:11:06.283 "name": "BaseBdev4", 00:11:06.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.283 "is_configured": false, 00:11:06.283 "data_offset": 0, 00:11:06.283 "data_size": 0 00:11:06.283 } 00:11:06.283 ] 00:11:06.283 }' 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.283 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.544 [2024-11-19 12:31:11.782489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.544 [2024-11-19 12:31:11.782721] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:06.544 [2024-11-19 12:31:11.782754] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:06.544 BaseBdev4 00:11:06.544 [2024-11-19 12:31:11.783066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:06.544 [2024-11-19 12:31:11.783219] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:06.544 [2024-11-19 12:31:11.783234] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:11:06.544 [2024-11-19 12:31:11.783358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.544 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.804 [ 00:11:06.804 { 00:11:06.804 "name": "BaseBdev4", 00:11:06.804 "aliases": [ 00:11:06.804 "2e2c2aec-b407-49de-9773-5795b9a058ff" 00:11:06.804 ], 00:11:06.804 "product_name": "Malloc disk", 00:11:06.804 "block_size": 512, 00:11:06.804 
"num_blocks": 65536, 00:11:06.804 "uuid": "2e2c2aec-b407-49de-9773-5795b9a058ff", 00:11:06.804 "assigned_rate_limits": { 00:11:06.804 "rw_ios_per_sec": 0, 00:11:06.804 "rw_mbytes_per_sec": 0, 00:11:06.804 "r_mbytes_per_sec": 0, 00:11:06.804 "w_mbytes_per_sec": 0 00:11:06.804 }, 00:11:06.804 "claimed": true, 00:11:06.804 "claim_type": "exclusive_write", 00:11:06.804 "zoned": false, 00:11:06.804 "supported_io_types": { 00:11:06.804 "read": true, 00:11:06.804 "write": true, 00:11:06.804 "unmap": true, 00:11:06.804 "flush": true, 00:11:06.804 "reset": true, 00:11:06.804 "nvme_admin": false, 00:11:06.804 "nvme_io": false, 00:11:06.804 "nvme_io_md": false, 00:11:06.804 "write_zeroes": true, 00:11:06.804 "zcopy": true, 00:11:06.804 "get_zone_info": false, 00:11:06.804 "zone_management": false, 00:11:06.804 "zone_append": false, 00:11:06.804 "compare": false, 00:11:06.804 "compare_and_write": false, 00:11:06.804 "abort": true, 00:11:06.804 "seek_hole": false, 00:11:06.804 "seek_data": false, 00:11:06.804 "copy": true, 00:11:06.804 "nvme_iov_md": false 00:11:06.804 }, 00:11:06.804 "memory_domains": [ 00:11:06.804 { 00:11:06.804 "dma_device_id": "system", 00:11:06.804 "dma_device_type": 1 00:11:06.804 }, 00:11:06.804 { 00:11:06.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.804 "dma_device_type": 2 00:11:06.804 } 00:11:06.804 ], 00:11:06.804 "driver_specific": {} 00:11:06.804 } 00:11:06.804 ] 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.804 "name": "Existed_Raid", 00:11:06.804 "uuid": "375d6935-ab9b-4e51-bf24-7db42323c63f", 00:11:06.804 "strip_size_kb": 0, 00:11:06.804 "state": "online", 00:11:06.804 "raid_level": "raid1", 00:11:06.804 "superblock": true, 00:11:06.804 "num_base_bdevs": 4, 
00:11:06.804 "num_base_bdevs_discovered": 4, 00:11:06.804 "num_base_bdevs_operational": 4, 00:11:06.804 "base_bdevs_list": [ 00:11:06.804 { 00:11:06.804 "name": "BaseBdev1", 00:11:06.804 "uuid": "54d1ad0e-5875-47fa-9604-353e0bb23722", 00:11:06.804 "is_configured": true, 00:11:06.804 "data_offset": 2048, 00:11:06.804 "data_size": 63488 00:11:06.804 }, 00:11:06.804 { 00:11:06.804 "name": "BaseBdev2", 00:11:06.804 "uuid": "01960bf7-9ebf-4eab-9415-f3df3c314979", 00:11:06.804 "is_configured": true, 00:11:06.804 "data_offset": 2048, 00:11:06.804 "data_size": 63488 00:11:06.804 }, 00:11:06.804 { 00:11:06.804 "name": "BaseBdev3", 00:11:06.804 "uuid": "7f2be3d9-8e06-4c42-8135-3af72c5ca068", 00:11:06.804 "is_configured": true, 00:11:06.804 "data_offset": 2048, 00:11:06.804 "data_size": 63488 00:11:06.804 }, 00:11:06.804 { 00:11:06.804 "name": "BaseBdev4", 00:11:06.804 "uuid": "2e2c2aec-b407-49de-9773-5795b9a058ff", 00:11:06.804 "is_configured": true, 00:11:06.804 "data_offset": 2048, 00:11:06.804 "data_size": 63488 00:11:06.804 } 00:11:06.804 ] 00:11:06.804 }' 00:11:06.804 12:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.805 12:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.064 
12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.064 [2024-11-19 12:31:12.214183] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.064 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.064 "name": "Existed_Raid", 00:11:07.064 "aliases": [ 00:11:07.064 "375d6935-ab9b-4e51-bf24-7db42323c63f" 00:11:07.064 ], 00:11:07.064 "product_name": "Raid Volume", 00:11:07.064 "block_size": 512, 00:11:07.064 "num_blocks": 63488, 00:11:07.064 "uuid": "375d6935-ab9b-4e51-bf24-7db42323c63f", 00:11:07.064 "assigned_rate_limits": { 00:11:07.065 "rw_ios_per_sec": 0, 00:11:07.065 "rw_mbytes_per_sec": 0, 00:11:07.065 "r_mbytes_per_sec": 0, 00:11:07.065 "w_mbytes_per_sec": 0 00:11:07.065 }, 00:11:07.065 "claimed": false, 00:11:07.065 "zoned": false, 00:11:07.065 "supported_io_types": { 00:11:07.065 "read": true, 00:11:07.065 "write": true, 00:11:07.065 "unmap": false, 00:11:07.065 "flush": false, 00:11:07.065 "reset": true, 00:11:07.065 "nvme_admin": false, 00:11:07.065 "nvme_io": false, 00:11:07.065 "nvme_io_md": false, 00:11:07.065 "write_zeroes": true, 00:11:07.065 "zcopy": false, 00:11:07.065 "get_zone_info": false, 00:11:07.065 "zone_management": false, 00:11:07.065 "zone_append": false, 00:11:07.065 "compare": false, 00:11:07.065 "compare_and_write": false, 00:11:07.065 "abort": false, 00:11:07.065 "seek_hole": false, 00:11:07.065 "seek_data": false, 00:11:07.065 "copy": false, 00:11:07.065 
"nvme_iov_md": false 00:11:07.065 }, 00:11:07.065 "memory_domains": [ 00:11:07.065 { 00:11:07.065 "dma_device_id": "system", 00:11:07.065 "dma_device_type": 1 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.065 "dma_device_type": 2 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "dma_device_id": "system", 00:11:07.065 "dma_device_type": 1 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.065 "dma_device_type": 2 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "dma_device_id": "system", 00:11:07.065 "dma_device_type": 1 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.065 "dma_device_type": 2 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "dma_device_id": "system", 00:11:07.065 "dma_device_type": 1 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.065 "dma_device_type": 2 00:11:07.065 } 00:11:07.065 ], 00:11:07.065 "driver_specific": { 00:11:07.065 "raid": { 00:11:07.065 "uuid": "375d6935-ab9b-4e51-bf24-7db42323c63f", 00:11:07.065 "strip_size_kb": 0, 00:11:07.065 "state": "online", 00:11:07.065 "raid_level": "raid1", 00:11:07.065 "superblock": true, 00:11:07.065 "num_base_bdevs": 4, 00:11:07.065 "num_base_bdevs_discovered": 4, 00:11:07.065 "num_base_bdevs_operational": 4, 00:11:07.065 "base_bdevs_list": [ 00:11:07.065 { 00:11:07.065 "name": "BaseBdev1", 00:11:07.065 "uuid": "54d1ad0e-5875-47fa-9604-353e0bb23722", 00:11:07.065 "is_configured": true, 00:11:07.065 "data_offset": 2048, 00:11:07.065 "data_size": 63488 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "name": "BaseBdev2", 00:11:07.065 "uuid": "01960bf7-9ebf-4eab-9415-f3df3c314979", 00:11:07.065 "is_configured": true, 00:11:07.065 "data_offset": 2048, 00:11:07.065 "data_size": 63488 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "name": "BaseBdev3", 00:11:07.065 "uuid": "7f2be3d9-8e06-4c42-8135-3af72c5ca068", 00:11:07.065 "is_configured": true, 
00:11:07.065 "data_offset": 2048, 00:11:07.065 "data_size": 63488 00:11:07.065 }, 00:11:07.065 { 00:11:07.065 "name": "BaseBdev4", 00:11:07.065 "uuid": "2e2c2aec-b407-49de-9773-5795b9a058ff", 00:11:07.065 "is_configured": true, 00:11:07.065 "data_offset": 2048, 00:11:07.065 "data_size": 63488 00:11:07.065 } 00:11:07.065 ] 00:11:07.065 } 00:11:07.065 } 00:11:07.065 }' 00:11:07.065 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.065 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.065 BaseBdev2 00:11:07.065 BaseBdev3 00:11:07.065 BaseBdev4' 00:11:07.065 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.326 12:31:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.326 [2024-11-19 12:31:12.525240] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:07.326 12:31:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.326 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.586 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.586 "name": "Existed_Raid", 00:11:07.586 "uuid": "375d6935-ab9b-4e51-bf24-7db42323c63f", 00:11:07.586 "strip_size_kb": 0, 00:11:07.586 
"state": "online", 00:11:07.586 "raid_level": "raid1", 00:11:07.586 "superblock": true, 00:11:07.586 "num_base_bdevs": 4, 00:11:07.586 "num_base_bdevs_discovered": 3, 00:11:07.586 "num_base_bdevs_operational": 3, 00:11:07.586 "base_bdevs_list": [ 00:11:07.586 { 00:11:07.586 "name": null, 00:11:07.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.586 "is_configured": false, 00:11:07.586 "data_offset": 0, 00:11:07.586 "data_size": 63488 00:11:07.586 }, 00:11:07.586 { 00:11:07.586 "name": "BaseBdev2", 00:11:07.586 "uuid": "01960bf7-9ebf-4eab-9415-f3df3c314979", 00:11:07.586 "is_configured": true, 00:11:07.586 "data_offset": 2048, 00:11:07.586 "data_size": 63488 00:11:07.586 }, 00:11:07.586 { 00:11:07.586 "name": "BaseBdev3", 00:11:07.586 "uuid": "7f2be3d9-8e06-4c42-8135-3af72c5ca068", 00:11:07.586 "is_configured": true, 00:11:07.586 "data_offset": 2048, 00:11:07.586 "data_size": 63488 00:11:07.586 }, 00:11:07.586 { 00:11:07.586 "name": "BaseBdev4", 00:11:07.586 "uuid": "2e2c2aec-b407-49de-9773-5795b9a058ff", 00:11:07.586 "is_configured": true, 00:11:07.586 "data_offset": 2048, 00:11:07.586 "data_size": 63488 00:11:07.586 } 00:11:07.586 ] 00:11:07.586 }' 00:11:07.587 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.587 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.846 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:07.847 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.847 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.847 12:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.847 12:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.847 12:31:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.847 [2024-11-19 12:31:13.039729] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.847 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.107 [2024-11-19 12:31:13.107019] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.107 [2024-11-19 12:31:13.174231] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:08.107 [2024-11-19 12:31:13.174397] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.107 [2024-11-19 12:31:13.186129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.107 [2024-11-19 12:31:13.186187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.107 [2024-11-19 12:31:13.186200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.107 BaseBdev2 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.107 12:31:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:08.107 [ 00:11:08.107 { 00:11:08.107 "name": "BaseBdev2", 00:11:08.107 "aliases": [ 00:11:08.107 "2687a63b-bb1a-435c-843d-55066560a3a8" 00:11:08.107 ], 00:11:08.107 "product_name": "Malloc disk", 00:11:08.107 "block_size": 512, 00:11:08.107 "num_blocks": 65536, 00:11:08.107 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 00:11:08.107 "assigned_rate_limits": { 00:11:08.107 "rw_ios_per_sec": 0, 00:11:08.107 "rw_mbytes_per_sec": 0, 00:11:08.107 "r_mbytes_per_sec": 0, 00:11:08.107 "w_mbytes_per_sec": 0 00:11:08.107 }, 00:11:08.107 "claimed": false, 00:11:08.107 "zoned": false, 00:11:08.107 "supported_io_types": { 00:11:08.107 "read": true, 00:11:08.107 "write": true, 00:11:08.107 "unmap": true, 00:11:08.107 "flush": true, 00:11:08.107 "reset": true, 00:11:08.107 "nvme_admin": false, 00:11:08.107 "nvme_io": false, 00:11:08.107 "nvme_io_md": false, 00:11:08.107 "write_zeroes": true, 00:11:08.107 "zcopy": true, 00:11:08.107 "get_zone_info": false, 00:11:08.107 "zone_management": false, 00:11:08.107 "zone_append": false, 00:11:08.107 "compare": false, 00:11:08.107 "compare_and_write": false, 00:11:08.107 "abort": true, 00:11:08.107 "seek_hole": false, 00:11:08.107 "seek_data": false, 00:11:08.107 "copy": true, 00:11:08.107 "nvme_iov_md": false 00:11:08.107 }, 00:11:08.107 "memory_domains": [ 00:11:08.107 { 00:11:08.107 "dma_device_id": "system", 00:11:08.107 "dma_device_type": 1 00:11:08.107 }, 00:11:08.107 { 00:11:08.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.107 "dma_device_type": 2 00:11:08.107 } 00:11:08.107 ], 00:11:08.107 "driver_specific": {} 00:11:08.107 } 00:11:08.108 ] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.108 12:31:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 BaseBdev3 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 12:31:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 [ 00:11:08.108 { 00:11:08.108 "name": "BaseBdev3", 00:11:08.108 "aliases": [ 00:11:08.108 "715fb341-cd16-4712-8757-173de2de1ae3" 00:11:08.108 ], 00:11:08.108 "product_name": "Malloc disk", 00:11:08.108 "block_size": 512, 00:11:08.108 "num_blocks": 65536, 00:11:08.108 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:08.108 "assigned_rate_limits": { 00:11:08.108 "rw_ios_per_sec": 0, 00:11:08.108 "rw_mbytes_per_sec": 0, 00:11:08.108 "r_mbytes_per_sec": 0, 00:11:08.108 "w_mbytes_per_sec": 0 00:11:08.108 }, 00:11:08.108 "claimed": false, 00:11:08.108 "zoned": false, 00:11:08.108 "supported_io_types": { 00:11:08.108 "read": true, 00:11:08.108 "write": true, 00:11:08.108 "unmap": true, 00:11:08.108 "flush": true, 00:11:08.108 "reset": true, 00:11:08.108 "nvme_admin": false, 00:11:08.108 "nvme_io": false, 00:11:08.108 "nvme_io_md": false, 00:11:08.108 "write_zeroes": true, 00:11:08.108 "zcopy": true, 00:11:08.108 "get_zone_info": false, 00:11:08.108 "zone_management": false, 00:11:08.108 "zone_append": false, 00:11:08.108 "compare": false, 00:11:08.108 "compare_and_write": false, 00:11:08.108 "abort": true, 00:11:08.108 "seek_hole": false, 00:11:08.108 "seek_data": false, 00:11:08.108 "copy": true, 00:11:08.108 "nvme_iov_md": false 00:11:08.108 }, 00:11:08.108 "memory_domains": [ 00:11:08.108 { 00:11:08.108 "dma_device_id": "system", 00:11:08.108 "dma_device_type": 1 00:11:08.108 }, 00:11:08.108 { 00:11:08.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.108 "dma_device_type": 2 00:11:08.108 } 00:11:08.108 ], 00:11:08.108 "driver_specific": {} 00:11:08.108 } 00:11:08.108 ] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 BaseBdev4 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 [ 00:11:08.108 { 00:11:08.368 "name": "BaseBdev4", 00:11:08.368 "aliases": [ 00:11:08.368 "82b22542-13e0-40b9-9ebe-24500d274ede" 00:11:08.368 ], 00:11:08.368 "product_name": "Malloc disk", 00:11:08.369 "block_size": 512, 00:11:08.369 "num_blocks": 65536, 00:11:08.369 "uuid": "82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:08.369 "assigned_rate_limits": { 00:11:08.369 "rw_ios_per_sec": 0, 00:11:08.369 "rw_mbytes_per_sec": 0, 00:11:08.369 "r_mbytes_per_sec": 0, 00:11:08.369 "w_mbytes_per_sec": 0 00:11:08.369 }, 00:11:08.369 "claimed": false, 00:11:08.369 "zoned": false, 00:11:08.369 "supported_io_types": { 00:11:08.369 "read": true, 00:11:08.369 "write": true, 00:11:08.369 "unmap": true, 00:11:08.369 "flush": true, 00:11:08.369 "reset": true, 00:11:08.369 "nvme_admin": false, 00:11:08.369 "nvme_io": false, 00:11:08.369 "nvme_io_md": false, 00:11:08.369 "write_zeroes": true, 00:11:08.369 "zcopy": true, 00:11:08.369 "get_zone_info": false, 00:11:08.369 "zone_management": false, 00:11:08.369 "zone_append": false, 00:11:08.369 "compare": false, 00:11:08.369 "compare_and_write": false, 00:11:08.369 "abort": true, 00:11:08.369 "seek_hole": false, 00:11:08.369 "seek_data": false, 00:11:08.369 "copy": true, 00:11:08.369 "nvme_iov_md": false 00:11:08.369 }, 00:11:08.369 "memory_domains": [ 00:11:08.369 { 00:11:08.369 "dma_device_id": "system", 00:11:08.369 "dma_device_type": 1 00:11:08.369 }, 00:11:08.369 { 00:11:08.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.369 "dma_device_type": 2 00:11:08.369 } 00:11:08.369 ], 00:11:08.369 "driver_specific": {} 00:11:08.369 } 00:11:08.369 ] 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.369 [2024-11-19 12:31:13.383045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.369 [2024-11-19 12:31:13.383163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.369 [2024-11-19 12:31:13.383202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.369 [2024-11-19 12:31:13.384986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.369 [2024-11-19 12:31:13.385068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.369 "name": "Existed_Raid", 00:11:08.369 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:08.369 "strip_size_kb": 0, 00:11:08.369 "state": "configuring", 00:11:08.369 "raid_level": "raid1", 00:11:08.369 "superblock": true, 00:11:08.369 "num_base_bdevs": 4, 00:11:08.369 "num_base_bdevs_discovered": 3, 00:11:08.369 "num_base_bdevs_operational": 4, 00:11:08.369 "base_bdevs_list": [ 00:11:08.369 { 00:11:08.369 "name": "BaseBdev1", 00:11:08.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.369 "is_configured": false, 00:11:08.369 "data_offset": 0, 00:11:08.369 "data_size": 0 00:11:08.369 }, 00:11:08.369 { 00:11:08.369 "name": "BaseBdev2", 00:11:08.369 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 
00:11:08.369 "is_configured": true, 00:11:08.369 "data_offset": 2048, 00:11:08.369 "data_size": 63488 00:11:08.369 }, 00:11:08.369 { 00:11:08.369 "name": "BaseBdev3", 00:11:08.369 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:08.369 "is_configured": true, 00:11:08.369 "data_offset": 2048, 00:11:08.369 "data_size": 63488 00:11:08.369 }, 00:11:08.369 { 00:11:08.369 "name": "BaseBdev4", 00:11:08.369 "uuid": "82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:08.369 "is_configured": true, 00:11:08.369 "data_offset": 2048, 00:11:08.369 "data_size": 63488 00:11:08.369 } 00:11:08.369 ] 00:11:08.369 }' 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.369 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.636 [2024-11-19 12:31:13.826318] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.636 "name": "Existed_Raid", 00:11:08.636 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:08.636 "strip_size_kb": 0, 00:11:08.636 "state": "configuring", 00:11:08.636 "raid_level": "raid1", 00:11:08.636 "superblock": true, 00:11:08.636 "num_base_bdevs": 4, 00:11:08.636 "num_base_bdevs_discovered": 2, 00:11:08.636 "num_base_bdevs_operational": 4, 00:11:08.636 "base_bdevs_list": [ 00:11:08.636 { 00:11:08.636 "name": "BaseBdev1", 00:11:08.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.636 "is_configured": false, 00:11:08.636 "data_offset": 0, 00:11:08.636 "data_size": 0 00:11:08.636 }, 00:11:08.636 { 00:11:08.636 "name": null, 00:11:08.636 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 00:11:08.636 
"is_configured": false, 00:11:08.636 "data_offset": 0, 00:11:08.636 "data_size": 63488 00:11:08.636 }, 00:11:08.636 { 00:11:08.636 "name": "BaseBdev3", 00:11:08.636 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:08.636 "is_configured": true, 00:11:08.636 "data_offset": 2048, 00:11:08.636 "data_size": 63488 00:11:08.636 }, 00:11:08.636 { 00:11:08.636 "name": "BaseBdev4", 00:11:08.636 "uuid": "82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:08.636 "is_configured": true, 00:11:08.636 "data_offset": 2048, 00:11:08.636 "data_size": 63488 00:11:08.636 } 00:11:08.636 ] 00:11:08.636 }' 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.636 12:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.217 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.218 [2024-11-19 12:31:14.272776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.218 BaseBdev1 
00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.218 [ 00:11:09.218 { 00:11:09.218 "name": "BaseBdev1", 00:11:09.218 "aliases": [ 00:11:09.218 "ad4ca12c-9552-49aa-a7e7-1584f19bce52" 00:11:09.218 ], 00:11:09.218 "product_name": "Malloc disk", 00:11:09.218 "block_size": 512, 00:11:09.218 "num_blocks": 65536, 00:11:09.218 "uuid": "ad4ca12c-9552-49aa-a7e7-1584f19bce52", 00:11:09.218 "assigned_rate_limits": { 00:11:09.218 
"rw_ios_per_sec": 0, 00:11:09.218 "rw_mbytes_per_sec": 0, 00:11:09.218 "r_mbytes_per_sec": 0, 00:11:09.218 "w_mbytes_per_sec": 0 00:11:09.218 }, 00:11:09.218 "claimed": true, 00:11:09.218 "claim_type": "exclusive_write", 00:11:09.218 "zoned": false, 00:11:09.218 "supported_io_types": { 00:11:09.218 "read": true, 00:11:09.218 "write": true, 00:11:09.218 "unmap": true, 00:11:09.218 "flush": true, 00:11:09.218 "reset": true, 00:11:09.218 "nvme_admin": false, 00:11:09.218 "nvme_io": false, 00:11:09.218 "nvme_io_md": false, 00:11:09.218 "write_zeroes": true, 00:11:09.218 "zcopy": true, 00:11:09.218 "get_zone_info": false, 00:11:09.218 "zone_management": false, 00:11:09.218 "zone_append": false, 00:11:09.218 "compare": false, 00:11:09.218 "compare_and_write": false, 00:11:09.218 "abort": true, 00:11:09.218 "seek_hole": false, 00:11:09.218 "seek_data": false, 00:11:09.218 "copy": true, 00:11:09.218 "nvme_iov_md": false 00:11:09.218 }, 00:11:09.218 "memory_domains": [ 00:11:09.218 { 00:11:09.218 "dma_device_id": "system", 00:11:09.218 "dma_device_type": 1 00:11:09.218 }, 00:11:09.218 { 00:11:09.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.218 "dma_device_type": 2 00:11:09.218 } 00:11:09.218 ], 00:11:09.218 "driver_specific": {} 00:11:09.218 } 00:11:09.218 ] 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.218 "name": "Existed_Raid", 00:11:09.218 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:09.218 "strip_size_kb": 0, 00:11:09.218 "state": "configuring", 00:11:09.218 "raid_level": "raid1", 00:11:09.218 "superblock": true, 00:11:09.218 "num_base_bdevs": 4, 00:11:09.218 "num_base_bdevs_discovered": 3, 00:11:09.218 "num_base_bdevs_operational": 4, 00:11:09.218 "base_bdevs_list": [ 00:11:09.218 { 00:11:09.218 "name": "BaseBdev1", 00:11:09.218 "uuid": "ad4ca12c-9552-49aa-a7e7-1584f19bce52", 00:11:09.218 "is_configured": true, 00:11:09.218 "data_offset": 2048, 00:11:09.218 "data_size": 63488 
00:11:09.218 }, 00:11:09.218 { 00:11:09.218 "name": null, 00:11:09.218 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 00:11:09.218 "is_configured": false, 00:11:09.218 "data_offset": 0, 00:11:09.218 "data_size": 63488 00:11:09.218 }, 00:11:09.218 { 00:11:09.218 "name": "BaseBdev3", 00:11:09.218 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:09.218 "is_configured": true, 00:11:09.218 "data_offset": 2048, 00:11:09.218 "data_size": 63488 00:11:09.218 }, 00:11:09.218 { 00:11:09.218 "name": "BaseBdev4", 00:11:09.218 "uuid": "82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:09.218 "is_configured": true, 00:11:09.218 "data_offset": 2048, 00:11:09.218 "data_size": 63488 00:11:09.218 } 00:11:09.218 ] 00:11:09.218 }' 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.218 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.788 
[2024-11-19 12:31:14.851854] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.788 12:31:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.788 "name": "Existed_Raid", 00:11:09.788 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:09.788 "strip_size_kb": 0, 00:11:09.788 "state": "configuring", 00:11:09.788 "raid_level": "raid1", 00:11:09.788 "superblock": true, 00:11:09.788 "num_base_bdevs": 4, 00:11:09.788 "num_base_bdevs_discovered": 2, 00:11:09.788 "num_base_bdevs_operational": 4, 00:11:09.788 "base_bdevs_list": [ 00:11:09.788 { 00:11:09.788 "name": "BaseBdev1", 00:11:09.788 "uuid": "ad4ca12c-9552-49aa-a7e7-1584f19bce52", 00:11:09.788 "is_configured": true, 00:11:09.788 "data_offset": 2048, 00:11:09.788 "data_size": 63488 00:11:09.788 }, 00:11:09.788 { 00:11:09.788 "name": null, 00:11:09.788 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 00:11:09.788 "is_configured": false, 00:11:09.788 "data_offset": 0, 00:11:09.788 "data_size": 63488 00:11:09.788 }, 00:11:09.788 { 00:11:09.788 "name": null, 00:11:09.788 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:09.788 "is_configured": false, 00:11:09.788 "data_offset": 0, 00:11:09.788 "data_size": 63488 00:11:09.788 }, 00:11:09.788 { 00:11:09.788 "name": "BaseBdev4", 00:11:09.788 "uuid": "82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:09.788 "is_configured": true, 00:11:09.788 "data_offset": 2048, 00:11:09.788 "data_size": 63488 00:11:09.788 } 00:11:09.788 ] 00:11:09.788 }' 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.788 12:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.358 12:31:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.358 [2024-11-19 12:31:15.387123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.358 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:10.359 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.359 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.359 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.359 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.359 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.359 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.359 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.359 "name": "Existed_Raid", 00:11:10.359 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:10.359 "strip_size_kb": 0, 00:11:10.359 "state": "configuring", 00:11:10.359 "raid_level": "raid1", 00:11:10.359 "superblock": true, 00:11:10.359 "num_base_bdevs": 4, 00:11:10.359 "num_base_bdevs_discovered": 3, 00:11:10.359 "num_base_bdevs_operational": 4, 00:11:10.359 "base_bdevs_list": [ 00:11:10.359 { 00:11:10.359 "name": "BaseBdev1", 00:11:10.359 "uuid": "ad4ca12c-9552-49aa-a7e7-1584f19bce52", 00:11:10.359 "is_configured": true, 00:11:10.359 "data_offset": 2048, 00:11:10.359 "data_size": 63488 00:11:10.359 }, 00:11:10.359 { 00:11:10.359 "name": null, 00:11:10.359 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 00:11:10.359 "is_configured": false, 00:11:10.359 "data_offset": 0, 00:11:10.359 "data_size": 63488 00:11:10.359 }, 00:11:10.359 { 00:11:10.359 "name": "BaseBdev3", 00:11:10.359 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:10.359 "is_configured": true, 00:11:10.359 "data_offset": 2048, 00:11:10.359 "data_size": 63488 00:11:10.359 }, 00:11:10.359 { 00:11:10.359 "name": "BaseBdev4", 00:11:10.359 "uuid": 
"82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:10.359 "is_configured": true, 00:11:10.359 "data_offset": 2048, 00:11:10.359 "data_size": 63488 00:11:10.359 } 00:11:10.359 ] 00:11:10.359 }' 00:11:10.359 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.359 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.619 [2024-11-19 12:31:15.858498] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.619 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.879 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.879 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.879 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.879 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.879 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.879 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.879 "name": "Existed_Raid", 00:11:10.879 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:10.879 "strip_size_kb": 0, 00:11:10.879 "state": "configuring", 00:11:10.879 "raid_level": "raid1", 00:11:10.879 "superblock": true, 00:11:10.879 "num_base_bdevs": 4, 00:11:10.879 "num_base_bdevs_discovered": 2, 00:11:10.879 "num_base_bdevs_operational": 4, 00:11:10.879 "base_bdevs_list": [ 00:11:10.879 { 00:11:10.879 "name": null, 00:11:10.879 
"uuid": "ad4ca12c-9552-49aa-a7e7-1584f19bce52", 00:11:10.879 "is_configured": false, 00:11:10.879 "data_offset": 0, 00:11:10.879 "data_size": 63488 00:11:10.879 }, 00:11:10.879 { 00:11:10.879 "name": null, 00:11:10.879 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 00:11:10.879 "is_configured": false, 00:11:10.879 "data_offset": 0, 00:11:10.879 "data_size": 63488 00:11:10.879 }, 00:11:10.879 { 00:11:10.879 "name": "BaseBdev3", 00:11:10.879 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:10.879 "is_configured": true, 00:11:10.879 "data_offset": 2048, 00:11:10.879 "data_size": 63488 00:11:10.879 }, 00:11:10.879 { 00:11:10.879 "name": "BaseBdev4", 00:11:10.879 "uuid": "82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:10.879 "is_configured": true, 00:11:10.879 "data_offset": 2048, 00:11:10.879 "data_size": 63488 00:11:10.879 } 00:11:10.879 ] 00:11:10.879 }' 00:11:10.879 12:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.879 12:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.139 [2024-11-19 12:31:16.348117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.139 12:31:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.139 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.399 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.399 "name": "Existed_Raid", 00:11:11.399 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:11.399 "strip_size_kb": 0, 00:11:11.399 "state": "configuring", 00:11:11.399 "raid_level": "raid1", 00:11:11.399 "superblock": true, 00:11:11.399 "num_base_bdevs": 4, 00:11:11.399 "num_base_bdevs_discovered": 3, 00:11:11.399 "num_base_bdevs_operational": 4, 00:11:11.399 "base_bdevs_list": [ 00:11:11.399 { 00:11:11.399 "name": null, 00:11:11.399 "uuid": "ad4ca12c-9552-49aa-a7e7-1584f19bce52", 00:11:11.399 "is_configured": false, 00:11:11.399 "data_offset": 0, 00:11:11.399 "data_size": 63488 00:11:11.399 }, 00:11:11.399 { 00:11:11.399 "name": "BaseBdev2", 00:11:11.399 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 00:11:11.399 "is_configured": true, 00:11:11.399 "data_offset": 2048, 00:11:11.399 "data_size": 63488 00:11:11.399 }, 00:11:11.399 { 00:11:11.399 "name": "BaseBdev3", 00:11:11.399 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:11.399 "is_configured": true, 00:11:11.399 "data_offset": 2048, 00:11:11.399 "data_size": 63488 00:11:11.399 }, 00:11:11.399 { 00:11:11.399 "name": "BaseBdev4", 00:11:11.399 "uuid": "82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:11.399 "is_configured": true, 00:11:11.399 "data_offset": 2048, 00:11:11.399 "data_size": 63488 00:11:11.399 } 00:11:11.399 ] 00:11:11.399 }' 00:11:11.399 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.399 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:11.659 12:31:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ad4ca12c-9552-49aa-a7e7-1584f19bce52 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 [2024-11-19 12:31:16.870518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:11.659 [2024-11-19 12:31:16.870756] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:11.659 [2024-11-19 12:31:16.870805] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:11.659 NewBaseBdev 00:11:11.659 [2024-11-19 12:31:16.871097] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:11.659 [2024-11-19 12:31:16.871261] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:11.659 [2024-11-19 12:31:16.871273] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:11.659 [2024-11-19 12:31:16.871388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:11.659 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.659 12:31:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 [ 00:11:11.659 { 00:11:11.659 "name": "NewBaseBdev", 00:11:11.659 "aliases": [ 00:11:11.659 "ad4ca12c-9552-49aa-a7e7-1584f19bce52" 00:11:11.659 ], 00:11:11.659 "product_name": "Malloc disk", 00:11:11.659 "block_size": 512, 00:11:11.659 "num_blocks": 65536, 00:11:11.659 "uuid": "ad4ca12c-9552-49aa-a7e7-1584f19bce52", 00:11:11.659 "assigned_rate_limits": { 00:11:11.659 "rw_ios_per_sec": 0, 00:11:11.659 "rw_mbytes_per_sec": 0, 00:11:11.659 "r_mbytes_per_sec": 0, 00:11:11.659 "w_mbytes_per_sec": 0 00:11:11.659 }, 00:11:11.660 "claimed": true, 00:11:11.660 "claim_type": "exclusive_write", 00:11:11.660 "zoned": false, 00:11:11.660 "supported_io_types": { 00:11:11.660 "read": true, 00:11:11.660 "write": true, 00:11:11.660 "unmap": true, 00:11:11.660 "flush": true, 00:11:11.660 "reset": true, 00:11:11.660 "nvme_admin": false, 00:11:11.660 "nvme_io": false, 00:11:11.660 "nvme_io_md": false, 00:11:11.660 "write_zeroes": true, 00:11:11.660 "zcopy": true, 00:11:11.660 "get_zone_info": false, 00:11:11.660 "zone_management": false, 00:11:11.660 "zone_append": false, 00:11:11.660 "compare": false, 00:11:11.660 "compare_and_write": false, 00:11:11.660 "abort": true, 00:11:11.660 "seek_hole": false, 00:11:11.660 "seek_data": false, 00:11:11.660 "copy": true, 00:11:11.660 "nvme_iov_md": false 00:11:11.660 }, 00:11:11.660 "memory_domains": [ 00:11:11.660 { 00:11:11.660 "dma_device_id": "system", 00:11:11.660 "dma_device_type": 1 00:11:11.660 }, 00:11:11.660 { 00:11:11.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.660 "dma_device_type": 2 00:11:11.660 } 00:11:11.660 ], 00:11:11.660 "driver_specific": {} 00:11:11.660 } 00:11:11.660 ] 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:11.660 12:31:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.660 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.920 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.920 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.920 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.920 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.920 "name": "Existed_Raid", 00:11:11.920 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:11.920 "strip_size_kb": 0, 00:11:11.920 
"state": "online", 00:11:11.920 "raid_level": "raid1", 00:11:11.920 "superblock": true, 00:11:11.920 "num_base_bdevs": 4, 00:11:11.920 "num_base_bdevs_discovered": 4, 00:11:11.920 "num_base_bdevs_operational": 4, 00:11:11.920 "base_bdevs_list": [ 00:11:11.920 { 00:11:11.920 "name": "NewBaseBdev", 00:11:11.920 "uuid": "ad4ca12c-9552-49aa-a7e7-1584f19bce52", 00:11:11.920 "is_configured": true, 00:11:11.920 "data_offset": 2048, 00:11:11.920 "data_size": 63488 00:11:11.920 }, 00:11:11.920 { 00:11:11.920 "name": "BaseBdev2", 00:11:11.920 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 00:11:11.920 "is_configured": true, 00:11:11.920 "data_offset": 2048, 00:11:11.920 "data_size": 63488 00:11:11.920 }, 00:11:11.920 { 00:11:11.920 "name": "BaseBdev3", 00:11:11.920 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:11.920 "is_configured": true, 00:11:11.920 "data_offset": 2048, 00:11:11.920 "data_size": 63488 00:11:11.920 }, 00:11:11.920 { 00:11:11.920 "name": "BaseBdev4", 00:11:11.920 "uuid": "82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:11.920 "is_configured": true, 00:11:11.920 "data_offset": 2048, 00:11:11.920 "data_size": 63488 00:11:11.920 } 00:11:11.920 ] 00:11:11.920 }' 00:11:11.920 12:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.920 12:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:12.181 
12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.181 [2024-11-19 12:31:17.346110] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.181 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:12.181 "name": "Existed_Raid", 00:11:12.181 "aliases": [ 00:11:12.181 "e3284200-437a-43cd-843a-f02f76faeaab" 00:11:12.181 ], 00:11:12.181 "product_name": "Raid Volume", 00:11:12.181 "block_size": 512, 00:11:12.181 "num_blocks": 63488, 00:11:12.181 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:12.181 "assigned_rate_limits": { 00:11:12.181 "rw_ios_per_sec": 0, 00:11:12.181 "rw_mbytes_per_sec": 0, 00:11:12.181 "r_mbytes_per_sec": 0, 00:11:12.181 "w_mbytes_per_sec": 0 00:11:12.181 }, 00:11:12.181 "claimed": false, 00:11:12.181 "zoned": false, 00:11:12.181 "supported_io_types": { 00:11:12.181 "read": true, 00:11:12.181 "write": true, 00:11:12.181 "unmap": false, 00:11:12.181 "flush": false, 00:11:12.181 "reset": true, 00:11:12.181 "nvme_admin": false, 00:11:12.181 "nvme_io": false, 00:11:12.181 "nvme_io_md": false, 00:11:12.181 "write_zeroes": true, 00:11:12.181 "zcopy": false, 00:11:12.181 "get_zone_info": false, 00:11:12.181 "zone_management": false, 00:11:12.181 "zone_append": false, 00:11:12.181 "compare": false, 00:11:12.181 "compare_and_write": false, 00:11:12.181 
"abort": false, 00:11:12.181 "seek_hole": false, 00:11:12.181 "seek_data": false, 00:11:12.181 "copy": false, 00:11:12.181 "nvme_iov_md": false 00:11:12.181 }, 00:11:12.181 "memory_domains": [ 00:11:12.181 { 00:11:12.181 "dma_device_id": "system", 00:11:12.181 "dma_device_type": 1 00:11:12.181 }, 00:11:12.181 { 00:11:12.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.181 "dma_device_type": 2 00:11:12.181 }, 00:11:12.181 { 00:11:12.181 "dma_device_id": "system", 00:11:12.181 "dma_device_type": 1 00:11:12.181 }, 00:11:12.181 { 00:11:12.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.181 "dma_device_type": 2 00:11:12.181 }, 00:11:12.181 { 00:11:12.181 "dma_device_id": "system", 00:11:12.181 "dma_device_type": 1 00:11:12.181 }, 00:11:12.181 { 00:11:12.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.181 "dma_device_type": 2 00:11:12.181 }, 00:11:12.181 { 00:11:12.181 "dma_device_id": "system", 00:11:12.181 "dma_device_type": 1 00:11:12.181 }, 00:11:12.181 { 00:11:12.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.181 "dma_device_type": 2 00:11:12.181 } 00:11:12.181 ], 00:11:12.181 "driver_specific": { 00:11:12.181 "raid": { 00:11:12.181 "uuid": "e3284200-437a-43cd-843a-f02f76faeaab", 00:11:12.181 "strip_size_kb": 0, 00:11:12.181 "state": "online", 00:11:12.181 "raid_level": "raid1", 00:11:12.181 "superblock": true, 00:11:12.181 "num_base_bdevs": 4, 00:11:12.181 "num_base_bdevs_discovered": 4, 00:11:12.181 "num_base_bdevs_operational": 4, 00:11:12.181 "base_bdevs_list": [ 00:11:12.181 { 00:11:12.181 "name": "NewBaseBdev", 00:11:12.181 "uuid": "ad4ca12c-9552-49aa-a7e7-1584f19bce52", 00:11:12.181 "is_configured": true, 00:11:12.181 "data_offset": 2048, 00:11:12.181 "data_size": 63488 00:11:12.181 }, 00:11:12.182 { 00:11:12.182 "name": "BaseBdev2", 00:11:12.182 "uuid": "2687a63b-bb1a-435c-843d-55066560a3a8", 00:11:12.182 "is_configured": true, 00:11:12.182 "data_offset": 2048, 00:11:12.182 "data_size": 63488 00:11:12.182 }, 00:11:12.182 { 
00:11:12.182 "name": "BaseBdev3", 00:11:12.182 "uuid": "715fb341-cd16-4712-8757-173de2de1ae3", 00:11:12.182 "is_configured": true, 00:11:12.182 "data_offset": 2048, 00:11:12.182 "data_size": 63488 00:11:12.182 }, 00:11:12.182 { 00:11:12.182 "name": "BaseBdev4", 00:11:12.182 "uuid": "82b22542-13e0-40b9-9ebe-24500d274ede", 00:11:12.182 "is_configured": true, 00:11:12.182 "data_offset": 2048, 00:11:12.182 "data_size": 63488 00:11:12.182 } 00:11:12.182 ] 00:11:12.182 } 00:11:12.182 } 00:11:12.182 }' 00:11:12.182 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.182 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:12.182 BaseBdev2 00:11:12.182 BaseBdev3 00:11:12.182 BaseBdev4' 00:11:12.182 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.441 [2024-11-19 12:31:17.693197] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.441 [2024-11-19 12:31:17.693230] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.441 [2024-11-19 12:31:17.693302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.441 [2024-11-19 12:31:17.693563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.441 [2024-11-19 12:31:17.693584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84814 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84814 ']' 00:11:12.441 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84814 00:11:12.701 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:12.701 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:12.701 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84814 00:11:12.701 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:12.701 killing process with pid 84814 00:11:12.701 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:12.701 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84814' 00:11:12.701 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84814 00:11:12.701 [2024-11-19 12:31:17.741873] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.701 12:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84814 00:11:12.701 [2024-11-19 12:31:17.783372] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.961 ************************************ 00:11:12.961 END TEST raid_state_function_test_sb 00:11:12.961 ************************************ 00:11:12.961 12:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:12.961 00:11:12.961 real 0m9.502s 
00:11:12.961 user 0m16.278s 00:11:12.961 sys 0m2.013s 00:11:12.961 12:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.961 12:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.961 12:31:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:12.961 12:31:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:12.961 12:31:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.961 12:31:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.961 ************************************ 00:11:12.961 START TEST raid_superblock_test 00:11:12.961 ************************************ 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:12.961 12:31:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85469 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85469 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85469 ']' 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.961 12:31:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.962 12:31:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.962 12:31:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.962 12:31:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.962 [2024-11-19 12:31:18.171441] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:12.962 [2024-11-19 12:31:18.171665] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85469 ] 00:11:13.221 [2024-11-19 12:31:18.331037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.221 [2024-11-19 12:31:18.378237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.221 [2024-11-19 12:31:18.422224] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.221 [2024-11-19 12:31:18.422338] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:13.792 
12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.792 malloc1 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.792 [2024-11-19 12:31:19.037716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:13.792 [2024-11-19 12:31:19.037884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.792 [2024-11-19 12:31:19.037953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:13.792 [2024-11-19 12:31:19.037997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.792 [2024-11-19 12:31:19.040165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.792 [2024-11-19 12:31:19.040243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:13.792 pt1 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.792 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.052 malloc2 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.052 [2024-11-19 12:31:19.078829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:14.052 [2024-11-19 12:31:19.078959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.052 [2024-11-19 12:31:19.078984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:14.052 [2024-11-19 12:31:19.078998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.052 [2024-11-19 12:31:19.081409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.052 [2024-11-19 12:31:19.081448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:14.052 
pt2 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.052 malloc3 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.052 [2024-11-19 12:31:19.107612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:14.052 [2024-11-19 12:31:19.107728] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.052 [2024-11-19 12:31:19.107780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:14.052 [2024-11-19 12:31:19.107813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.052 [2024-11-19 12:31:19.109909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.052 [2024-11-19 12:31:19.109982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:14.052 pt3 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.052 malloc4 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.052 [2024-11-19 12:31:19.140657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:14.052 [2024-11-19 12:31:19.140808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.052 [2024-11-19 12:31:19.140849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:14.052 [2024-11-19 12:31:19.140891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.052 [2024-11-19 12:31:19.143274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.052 [2024-11-19 12:31:19.143360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:14.052 pt4 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.052 [2024-11-19 12:31:19.152701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:14.052 [2024-11-19 12:31:19.154571] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:14.052 [2024-11-19 12:31:19.154667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:14.052 [2024-11-19 12:31:19.154770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:14.052 [2024-11-19 12:31:19.154977] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:14.052 [2024-11-19 12:31:19.155029] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:14.052 [2024-11-19 12:31:19.155333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:14.052 [2024-11-19 12:31:19.155525] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:14.052 [2024-11-19 12:31:19.155571] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:14.052 [2024-11-19 12:31:19.155726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.052 
12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.052 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.052 "name": "raid_bdev1", 00:11:14.052 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:14.053 "strip_size_kb": 0, 00:11:14.053 "state": "online", 00:11:14.053 "raid_level": "raid1", 00:11:14.053 "superblock": true, 00:11:14.053 "num_base_bdevs": 4, 00:11:14.053 "num_base_bdevs_discovered": 4, 00:11:14.053 "num_base_bdevs_operational": 4, 00:11:14.053 "base_bdevs_list": [ 00:11:14.053 { 00:11:14.053 "name": "pt1", 00:11:14.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.053 "is_configured": true, 00:11:14.053 "data_offset": 2048, 00:11:14.053 "data_size": 63488 00:11:14.053 }, 00:11:14.053 { 00:11:14.053 "name": "pt2", 00:11:14.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.053 "is_configured": true, 00:11:14.053 "data_offset": 2048, 00:11:14.053 "data_size": 63488 00:11:14.053 }, 00:11:14.053 { 00:11:14.053 "name": "pt3", 00:11:14.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.053 "is_configured": true, 00:11:14.053 "data_offset": 2048, 00:11:14.053 "data_size": 63488 
00:11:14.053 }, 00:11:14.053 { 00:11:14.053 "name": "pt4", 00:11:14.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.053 "is_configured": true, 00:11:14.053 "data_offset": 2048, 00:11:14.053 "data_size": 63488 00:11:14.053 } 00:11:14.053 ] 00:11:14.053 }' 00:11:14.053 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.053 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.622 [2024-11-19 12:31:19.600258] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.622 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.622 "name": "raid_bdev1", 00:11:14.622 "aliases": [ 00:11:14.622 "e140a64c-3d45-41b9-b629-37f6a9e61a0f" 00:11:14.622 ], 
00:11:14.622 "product_name": "Raid Volume", 00:11:14.622 "block_size": 512, 00:11:14.622 "num_blocks": 63488, 00:11:14.622 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:14.622 "assigned_rate_limits": { 00:11:14.622 "rw_ios_per_sec": 0, 00:11:14.622 "rw_mbytes_per_sec": 0, 00:11:14.622 "r_mbytes_per_sec": 0, 00:11:14.622 "w_mbytes_per_sec": 0 00:11:14.622 }, 00:11:14.622 "claimed": false, 00:11:14.622 "zoned": false, 00:11:14.622 "supported_io_types": { 00:11:14.622 "read": true, 00:11:14.622 "write": true, 00:11:14.622 "unmap": false, 00:11:14.622 "flush": false, 00:11:14.622 "reset": true, 00:11:14.622 "nvme_admin": false, 00:11:14.622 "nvme_io": false, 00:11:14.622 "nvme_io_md": false, 00:11:14.622 "write_zeroes": true, 00:11:14.622 "zcopy": false, 00:11:14.622 "get_zone_info": false, 00:11:14.622 "zone_management": false, 00:11:14.622 "zone_append": false, 00:11:14.622 "compare": false, 00:11:14.622 "compare_and_write": false, 00:11:14.622 "abort": false, 00:11:14.622 "seek_hole": false, 00:11:14.622 "seek_data": false, 00:11:14.622 "copy": false, 00:11:14.622 "nvme_iov_md": false 00:11:14.622 }, 00:11:14.622 "memory_domains": [ 00:11:14.622 { 00:11:14.622 "dma_device_id": "system", 00:11:14.622 "dma_device_type": 1 00:11:14.622 }, 00:11:14.622 { 00:11:14.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.622 "dma_device_type": 2 00:11:14.622 }, 00:11:14.622 { 00:11:14.622 "dma_device_id": "system", 00:11:14.622 "dma_device_type": 1 00:11:14.622 }, 00:11:14.622 { 00:11:14.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.622 "dma_device_type": 2 00:11:14.622 }, 00:11:14.622 { 00:11:14.622 "dma_device_id": "system", 00:11:14.622 "dma_device_type": 1 00:11:14.622 }, 00:11:14.622 { 00:11:14.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.622 "dma_device_type": 2 00:11:14.622 }, 00:11:14.622 { 00:11:14.622 "dma_device_id": "system", 00:11:14.622 "dma_device_type": 1 00:11:14.622 }, 00:11:14.622 { 00:11:14.622 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:14.622 "dma_device_type": 2 00:11:14.622 } 00:11:14.622 ], 00:11:14.622 "driver_specific": { 00:11:14.622 "raid": { 00:11:14.622 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:14.622 "strip_size_kb": 0, 00:11:14.622 "state": "online", 00:11:14.622 "raid_level": "raid1", 00:11:14.622 "superblock": true, 00:11:14.622 "num_base_bdevs": 4, 00:11:14.622 "num_base_bdevs_discovered": 4, 00:11:14.623 "num_base_bdevs_operational": 4, 00:11:14.623 "base_bdevs_list": [ 00:11:14.623 { 00:11:14.623 "name": "pt1", 00:11:14.623 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.623 "is_configured": true, 00:11:14.623 "data_offset": 2048, 00:11:14.623 "data_size": 63488 00:11:14.623 }, 00:11:14.623 { 00:11:14.623 "name": "pt2", 00:11:14.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.623 "is_configured": true, 00:11:14.623 "data_offset": 2048, 00:11:14.623 "data_size": 63488 00:11:14.623 }, 00:11:14.623 { 00:11:14.623 "name": "pt3", 00:11:14.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.623 "is_configured": true, 00:11:14.623 "data_offset": 2048, 00:11:14.623 "data_size": 63488 00:11:14.623 }, 00:11:14.623 { 00:11:14.623 "name": "pt4", 00:11:14.623 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.623 "is_configured": true, 00:11:14.623 "data_offset": 2048, 00:11:14.623 "data_size": 63488 00:11:14.623 } 00:11:14.623 ] 00:11:14.623 } 00:11:14.623 } 00:11:14.623 }' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:14.623 pt2 00:11:14.623 pt3 00:11:14.623 pt4' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.623 12:31:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.623 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 [2024-11-19 12:31:19.931611] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e140a64c-3d45-41b9-b629-37f6a9e61a0f 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e140a64c-3d45-41b9-b629-37f6a9e61a0f ']' 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 [2024-11-19 12:31:19.963241] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.882 [2024-11-19 12:31:19.963273] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.882 [2024-11-19 12:31:19.963357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.882 [2024-11-19 12:31:19.963447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.882 [2024-11-19 12:31:19.963457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 12:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:14.882 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:14.883 12:31:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.883 [2024-11-19 12:31:20.131003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:14.883 [2024-11-19 12:31:20.132925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:14.883 [2024-11-19 12:31:20.133021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:14.883 [2024-11-19 12:31:20.133073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:14.883 [2024-11-19 12:31:20.133156] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:14.883 [2024-11-19 12:31:20.133216] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:14.883 [2024-11-19 12:31:20.133240] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:14.883 [2024-11-19 12:31:20.133257] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:14.883 [2024-11-19 12:31:20.133271] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.883 [2024-11-19 12:31:20.133281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:11:14.883 request: 00:11:14.883 { 00:11:14.883 "name": "raid_bdev1", 00:11:14.883 "raid_level": "raid1", 00:11:14.883 "base_bdevs": [ 00:11:14.883 "malloc1", 00:11:14.883 "malloc2", 00:11:14.883 "malloc3", 00:11:14.883 "malloc4" 00:11:14.883 ], 00:11:14.883 "superblock": false, 00:11:14.883 "method": "bdev_raid_create", 00:11:14.883 "req_id": 1 00:11:14.883 } 00:11:14.883 Got JSON-RPC error response 00:11:14.883 response: 00:11:14.883 { 00:11:14.883 "code": -17, 00:11:14.883 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:14.883 } 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:14.883 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:15.141 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:15.142 
12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.142 [2024-11-19 12:31:20.194883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:15.142 [2024-11-19 12:31:20.194993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.142 [2024-11-19 12:31:20.195032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:15.142 [2024-11-19 12:31:20.195060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.142 [2024-11-19 12:31:20.197152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.142 [2024-11-19 12:31:20.197221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:15.142 [2024-11-19 12:31:20.197315] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:15.142 [2024-11-19 12:31:20.197373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:15.142 pt1 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.142 12:31:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.142 "name": "raid_bdev1", 00:11:15.142 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:15.142 "strip_size_kb": 0, 00:11:15.142 "state": "configuring", 00:11:15.142 "raid_level": "raid1", 00:11:15.142 "superblock": true, 00:11:15.142 "num_base_bdevs": 4, 00:11:15.142 "num_base_bdevs_discovered": 1, 00:11:15.142 "num_base_bdevs_operational": 4, 00:11:15.142 "base_bdevs_list": [ 00:11:15.142 { 00:11:15.142 "name": "pt1", 00:11:15.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.142 "is_configured": true, 00:11:15.142 "data_offset": 2048, 00:11:15.142 "data_size": 63488 00:11:15.142 }, 00:11:15.142 { 00:11:15.142 "name": null, 00:11:15.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.142 "is_configured": false, 00:11:15.142 "data_offset": 2048, 00:11:15.142 "data_size": 63488 00:11:15.142 }, 00:11:15.142 { 00:11:15.142 "name": null, 00:11:15.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.142 
"is_configured": false, 00:11:15.142 "data_offset": 2048, 00:11:15.142 "data_size": 63488 00:11:15.142 }, 00:11:15.142 { 00:11:15.142 "name": null, 00:11:15.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.142 "is_configured": false, 00:11:15.142 "data_offset": 2048, 00:11:15.142 "data_size": 63488 00:11:15.142 } 00:11:15.142 ] 00:11:15.142 }' 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.142 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.405 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:15.405 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.405 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.405 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.405 [2024-11-19 12:31:20.614192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.405 [2024-11-19 12:31:20.614263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.405 [2024-11-19 12:31:20.614295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:15.405 [2024-11-19 12:31:20.614304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.405 [2024-11-19 12:31:20.614732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.405 [2024-11-19 12:31:20.614780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.405 [2024-11-19 12:31:20.614864] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:15.405 [2024-11-19 12:31:20.614893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:15.405 pt2 00:11:15.405 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.406 [2024-11-19 12:31:20.626170] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.406 12:31:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.406 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.673 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.673 "name": "raid_bdev1", 00:11:15.673 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:15.673 "strip_size_kb": 0, 00:11:15.673 "state": "configuring", 00:11:15.673 "raid_level": "raid1", 00:11:15.673 "superblock": true, 00:11:15.673 "num_base_bdevs": 4, 00:11:15.673 "num_base_bdevs_discovered": 1, 00:11:15.673 "num_base_bdevs_operational": 4, 00:11:15.673 "base_bdevs_list": [ 00:11:15.674 { 00:11:15.674 "name": "pt1", 00:11:15.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.674 "is_configured": true, 00:11:15.674 "data_offset": 2048, 00:11:15.674 "data_size": 63488 00:11:15.674 }, 00:11:15.674 { 00:11:15.674 "name": null, 00:11:15.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.674 "is_configured": false, 00:11:15.674 "data_offset": 0, 00:11:15.674 "data_size": 63488 00:11:15.674 }, 00:11:15.674 { 00:11:15.674 "name": null, 00:11:15.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.674 "is_configured": false, 00:11:15.674 "data_offset": 2048, 00:11:15.674 "data_size": 63488 00:11:15.674 }, 00:11:15.674 { 00:11:15.674 "name": null, 00:11:15.674 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.674 "is_configured": false, 00:11:15.674 "data_offset": 2048, 00:11:15.674 "data_size": 63488 00:11:15.674 } 00:11:15.674 ] 00:11:15.674 }' 00:11:15.674 12:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.674 12:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.933 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:15.933 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.933 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.933 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.933 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.933 [2024-11-19 12:31:21.037489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.933 [2024-11-19 12:31:21.037646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.933 [2024-11-19 12:31:21.037685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:15.933 [2024-11-19 12:31:21.037717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.933 [2024-11-19 12:31:21.038161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.933 [2024-11-19 12:31:21.038229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.933 [2024-11-19 12:31:21.038339] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:15.933 [2024-11-19 12:31:21.038392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:15.934 pt2 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:15.934 12:31:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.934 [2024-11-19 12:31:21.049433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:15.934 [2024-11-19 12:31:21.049594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.934 [2024-11-19 12:31:21.049637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:15.934 [2024-11-19 12:31:21.049678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.934 [2024-11-19 12:31:21.050127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.934 [2024-11-19 12:31:21.050195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:15.934 [2024-11-19 12:31:21.050305] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:15.934 [2024-11-19 12:31:21.050357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:15.934 pt3 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.934 [2024-11-19 12:31:21.061412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:15.934 [2024-11-19 
12:31:21.061473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.934 [2024-11-19 12:31:21.061492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:15.934 [2024-11-19 12:31:21.061502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.934 [2024-11-19 12:31:21.061893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.934 [2024-11-19 12:31:21.061925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:15.934 [2024-11-19 12:31:21.061995] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:15.934 [2024-11-19 12:31:21.062018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:15.934 [2024-11-19 12:31:21.062128] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:15.934 [2024-11-19 12:31:21.062143] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.934 [2024-11-19 12:31:21.062389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:15.934 [2024-11-19 12:31:21.062512] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:15.934 [2024-11-19 12:31:21.062522] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:15.934 [2024-11-19 12:31:21.062630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.934 pt4 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.934 "name": "raid_bdev1", 00:11:15.934 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:15.934 "strip_size_kb": 0, 00:11:15.934 "state": "online", 00:11:15.934 "raid_level": "raid1", 00:11:15.934 "superblock": true, 00:11:15.934 "num_base_bdevs": 4, 00:11:15.934 
"num_base_bdevs_discovered": 4, 00:11:15.934 "num_base_bdevs_operational": 4, 00:11:15.934 "base_bdevs_list": [ 00:11:15.934 { 00:11:15.934 "name": "pt1", 00:11:15.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.934 "is_configured": true, 00:11:15.934 "data_offset": 2048, 00:11:15.934 "data_size": 63488 00:11:15.934 }, 00:11:15.934 { 00:11:15.934 "name": "pt2", 00:11:15.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.934 "is_configured": true, 00:11:15.934 "data_offset": 2048, 00:11:15.934 "data_size": 63488 00:11:15.934 }, 00:11:15.934 { 00:11:15.934 "name": "pt3", 00:11:15.934 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.934 "is_configured": true, 00:11:15.934 "data_offset": 2048, 00:11:15.934 "data_size": 63488 00:11:15.934 }, 00:11:15.934 { 00:11:15.934 "name": "pt4", 00:11:15.934 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.934 "is_configured": true, 00:11:15.934 "data_offset": 2048, 00:11:15.934 "data_size": 63488 00:11:15.934 } 00:11:15.934 ] 00:11:15.934 }' 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.934 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.503 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:16.503 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:16.503 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.503 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.503 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.503 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.503 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:16.503 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.504 [2024-11-19 12:31:21.524963] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.504 "name": "raid_bdev1", 00:11:16.504 "aliases": [ 00:11:16.504 "e140a64c-3d45-41b9-b629-37f6a9e61a0f" 00:11:16.504 ], 00:11:16.504 "product_name": "Raid Volume", 00:11:16.504 "block_size": 512, 00:11:16.504 "num_blocks": 63488, 00:11:16.504 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:16.504 "assigned_rate_limits": { 00:11:16.504 "rw_ios_per_sec": 0, 00:11:16.504 "rw_mbytes_per_sec": 0, 00:11:16.504 "r_mbytes_per_sec": 0, 00:11:16.504 "w_mbytes_per_sec": 0 00:11:16.504 }, 00:11:16.504 "claimed": false, 00:11:16.504 "zoned": false, 00:11:16.504 "supported_io_types": { 00:11:16.504 "read": true, 00:11:16.504 "write": true, 00:11:16.504 "unmap": false, 00:11:16.504 "flush": false, 00:11:16.504 "reset": true, 00:11:16.504 "nvme_admin": false, 00:11:16.504 "nvme_io": false, 00:11:16.504 "nvme_io_md": false, 00:11:16.504 "write_zeroes": true, 00:11:16.504 "zcopy": false, 00:11:16.504 "get_zone_info": false, 00:11:16.504 "zone_management": false, 00:11:16.504 "zone_append": false, 00:11:16.504 "compare": false, 00:11:16.504 "compare_and_write": false, 00:11:16.504 "abort": false, 00:11:16.504 "seek_hole": false, 00:11:16.504 "seek_data": false, 00:11:16.504 "copy": false, 00:11:16.504 "nvme_iov_md": false 00:11:16.504 }, 00:11:16.504 "memory_domains": [ 00:11:16.504 { 00:11:16.504 "dma_device_id": "system", 00:11:16.504 
"dma_device_type": 1 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.504 "dma_device_type": 2 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "dma_device_id": "system", 00:11:16.504 "dma_device_type": 1 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.504 "dma_device_type": 2 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "dma_device_id": "system", 00:11:16.504 "dma_device_type": 1 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.504 "dma_device_type": 2 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "dma_device_id": "system", 00:11:16.504 "dma_device_type": 1 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.504 "dma_device_type": 2 00:11:16.504 } 00:11:16.504 ], 00:11:16.504 "driver_specific": { 00:11:16.504 "raid": { 00:11:16.504 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:16.504 "strip_size_kb": 0, 00:11:16.504 "state": "online", 00:11:16.504 "raid_level": "raid1", 00:11:16.504 "superblock": true, 00:11:16.504 "num_base_bdevs": 4, 00:11:16.504 "num_base_bdevs_discovered": 4, 00:11:16.504 "num_base_bdevs_operational": 4, 00:11:16.504 "base_bdevs_list": [ 00:11:16.504 { 00:11:16.504 "name": "pt1", 00:11:16.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.504 "is_configured": true, 00:11:16.504 "data_offset": 2048, 00:11:16.504 "data_size": 63488 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "name": "pt2", 00:11:16.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.504 "is_configured": true, 00:11:16.504 "data_offset": 2048, 00:11:16.504 "data_size": 63488 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "name": "pt3", 00:11:16.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.504 "is_configured": true, 00:11:16.504 "data_offset": 2048, 00:11:16.504 "data_size": 63488 00:11:16.504 }, 00:11:16.504 { 00:11:16.504 "name": "pt4", 00:11:16.504 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:16.504 "is_configured": true, 00:11:16.504 "data_offset": 2048, 00:11:16.504 "data_size": 63488 00:11:16.504 } 00:11:16.504 ] 00:11:16.504 } 00:11:16.504 } 00:11:16.504 }' 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:16.504 pt2 00:11:16.504 pt3 00:11:16.504 pt4' 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.504 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.505 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.764 [2024-11-19 12:31:21.840325] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e140a64c-3d45-41b9-b629-37f6a9e61a0f '!=' e140a64c-3d45-41b9-b629-37f6a9e61a0f ']' 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.764 [2024-11-19 12:31:21.880018] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:16.764 12:31:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.764 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.764 "name": "raid_bdev1", 00:11:16.764 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:16.764 "strip_size_kb": 0, 00:11:16.764 "state": "online", 
00:11:16.764 "raid_level": "raid1", 00:11:16.764 "superblock": true, 00:11:16.764 "num_base_bdevs": 4, 00:11:16.764 "num_base_bdevs_discovered": 3, 00:11:16.764 "num_base_bdevs_operational": 3, 00:11:16.764 "base_bdevs_list": [ 00:11:16.764 { 00:11:16.764 "name": null, 00:11:16.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.764 "is_configured": false, 00:11:16.764 "data_offset": 0, 00:11:16.764 "data_size": 63488 00:11:16.764 }, 00:11:16.765 { 00:11:16.765 "name": "pt2", 00:11:16.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.765 "is_configured": true, 00:11:16.765 "data_offset": 2048, 00:11:16.765 "data_size": 63488 00:11:16.765 }, 00:11:16.765 { 00:11:16.765 "name": "pt3", 00:11:16.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.765 "is_configured": true, 00:11:16.765 "data_offset": 2048, 00:11:16.765 "data_size": 63488 00:11:16.765 }, 00:11:16.765 { 00:11:16.765 "name": "pt4", 00:11:16.765 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.765 "is_configured": true, 00:11:16.765 "data_offset": 2048, 00:11:16.765 "data_size": 63488 00:11:16.765 } 00:11:16.765 ] 00:11:16.765 }' 00:11:16.765 12:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.765 12:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.334 [2024-11-19 12:31:22.295352] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.334 [2024-11-19 12:31:22.295451] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.334 [2024-11-19 12:31:22.295558] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:17.334 [2024-11-19 12:31:22.295651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.334 [2024-11-19 12:31:22.295696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:17.334 
12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.334 [2024-11-19 12:31:22.375221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.334 [2024-11-19 12:31:22.375296] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.334 [2024-11-19 12:31:22.375320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:17.334 [2024-11-19 12:31:22.375335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.334 [2024-11-19 12:31:22.377936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.334 [2024-11-19 12:31:22.378040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.334 [2024-11-19 12:31:22.378146] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:17.334 [2024-11-19 12:31:22.378193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.334 pt2 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.334 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.334 "name": "raid_bdev1", 00:11:17.334 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:17.334 "strip_size_kb": 0, 00:11:17.334 "state": "configuring", 00:11:17.334 "raid_level": "raid1", 00:11:17.334 "superblock": true, 00:11:17.334 "num_base_bdevs": 4, 00:11:17.334 "num_base_bdevs_discovered": 1, 00:11:17.334 "num_base_bdevs_operational": 3, 00:11:17.334 "base_bdevs_list": [ 00:11:17.334 { 00:11:17.334 "name": null, 00:11:17.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.335 "is_configured": false, 00:11:17.335 "data_offset": 2048, 00:11:17.335 "data_size": 63488 00:11:17.335 }, 00:11:17.335 { 00:11:17.335 "name": "pt2", 00:11:17.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.335 "is_configured": true, 00:11:17.335 "data_offset": 2048, 00:11:17.335 "data_size": 63488 00:11:17.335 }, 00:11:17.335 { 00:11:17.335 "name": null, 00:11:17.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.335 "is_configured": false, 00:11:17.335 "data_offset": 2048, 00:11:17.335 "data_size": 63488 00:11:17.335 }, 00:11:17.335 { 00:11:17.335 "name": null, 00:11:17.335 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.335 "is_configured": false, 00:11:17.335 "data_offset": 2048, 00:11:17.335 "data_size": 63488 00:11:17.335 } 00:11:17.335 ] 00:11:17.335 }' 
00:11:17.335 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.335 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.594 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:17.594 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:17.594 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:17.594 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.594 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.595 [2024-11-19 12:31:22.814562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:17.595 [2024-11-19 12:31:22.814693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.595 [2024-11-19 12:31:22.814762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:17.595 [2024-11-19 12:31:22.814811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.595 [2024-11-19 12:31:22.815244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.595 [2024-11-19 12:31:22.815310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:17.595 [2024-11-19 12:31:22.815407] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:17.595 [2024-11-19 12:31:22.815455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:17.595 pt3 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.595 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.853 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.853 "name": "raid_bdev1", 00:11:17.853 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:17.853 "strip_size_kb": 0, 00:11:17.853 "state": "configuring", 00:11:17.853 "raid_level": "raid1", 00:11:17.853 "superblock": true, 00:11:17.853 "num_base_bdevs": 4, 00:11:17.853 "num_base_bdevs_discovered": 2, 00:11:17.853 "num_base_bdevs_operational": 3, 00:11:17.853 
"base_bdevs_list": [ 00:11:17.853 { 00:11:17.853 "name": null, 00:11:17.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.853 "is_configured": false, 00:11:17.853 "data_offset": 2048, 00:11:17.853 "data_size": 63488 00:11:17.853 }, 00:11:17.853 { 00:11:17.853 "name": "pt2", 00:11:17.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.853 "is_configured": true, 00:11:17.853 "data_offset": 2048, 00:11:17.853 "data_size": 63488 00:11:17.853 }, 00:11:17.853 { 00:11:17.853 "name": "pt3", 00:11:17.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.853 "is_configured": true, 00:11:17.853 "data_offset": 2048, 00:11:17.853 "data_size": 63488 00:11:17.853 }, 00:11:17.853 { 00:11:17.853 "name": null, 00:11:17.853 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.853 "is_configured": false, 00:11:17.853 "data_offset": 2048, 00:11:17.853 "data_size": 63488 00:11:17.853 } 00:11:17.853 ] 00:11:17.853 }' 00:11:17.853 12:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.853 12:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.112 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:18.112 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:18.112 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:18.112 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:18.112 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.112 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.112 [2024-11-19 12:31:23.241845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:18.113 [2024-11-19 12:31:23.241926] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.113 [2024-11-19 12:31:23.241950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:18.113 [2024-11-19 12:31:23.241961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.113 [2024-11-19 12:31:23.242375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.113 [2024-11-19 12:31:23.242394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:18.113 [2024-11-19 12:31:23.242470] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:18.113 [2024-11-19 12:31:23.242503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:18.113 [2024-11-19 12:31:23.242611] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:18.113 [2024-11-19 12:31:23.242624] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.113 [2024-11-19 12:31:23.242920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:18.113 [2024-11-19 12:31:23.243075] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:18.113 [2024-11-19 12:31:23.243087] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:18.113 [2024-11-19 12:31:23.243203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.113 pt4 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.113 "name": "raid_bdev1", 00:11:18.113 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:18.113 "strip_size_kb": 0, 00:11:18.113 "state": "online", 00:11:18.113 "raid_level": "raid1", 00:11:18.113 "superblock": true, 00:11:18.113 "num_base_bdevs": 4, 00:11:18.113 "num_base_bdevs_discovered": 3, 00:11:18.113 "num_base_bdevs_operational": 3, 00:11:18.113 "base_bdevs_list": [ 00:11:18.113 { 00:11:18.113 "name": null, 00:11:18.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.113 "is_configured": false, 00:11:18.113 
"data_offset": 2048, 00:11:18.113 "data_size": 63488 00:11:18.113 }, 00:11:18.113 { 00:11:18.113 "name": "pt2", 00:11:18.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.113 "is_configured": true, 00:11:18.113 "data_offset": 2048, 00:11:18.113 "data_size": 63488 00:11:18.113 }, 00:11:18.113 { 00:11:18.113 "name": "pt3", 00:11:18.113 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.113 "is_configured": true, 00:11:18.113 "data_offset": 2048, 00:11:18.113 "data_size": 63488 00:11:18.113 }, 00:11:18.113 { 00:11:18.113 "name": "pt4", 00:11:18.113 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.113 "is_configured": true, 00:11:18.113 "data_offset": 2048, 00:11:18.113 "data_size": 63488 00:11:18.113 } 00:11:18.113 ] 00:11:18.113 }' 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.113 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 [2024-11-19 12:31:23.641155] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.681 [2024-11-19 12:31:23.641275] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.681 [2024-11-19 12:31:23.641374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.681 [2024-11-19 12:31:23.641487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.681 [2024-11-19 12:31:23.641538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:18.681 12:31:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:18.681 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.682 [2024-11-19 12:31:23.717024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.682 [2024-11-19 12:31:23.717128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:18.682 [2024-11-19 12:31:23.717187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:18.682 [2024-11-19 12:31:23.717215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.682 [2024-11-19 12:31:23.719423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.682 [2024-11-19 12:31:23.719498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.682 [2024-11-19 12:31:23.719596] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:18.682 [2024-11-19 12:31:23.719661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:18.682 [2024-11-19 12:31:23.719828] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:18.682 [2024-11-19 12:31:23.719884] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.682 [2024-11-19 12:31:23.719924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:11:18.682 [2024-11-19 12:31:23.720005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.682 [2024-11-19 12:31:23.720132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:18.682 pt1 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.682 "name": "raid_bdev1", 00:11:18.682 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:18.682 "strip_size_kb": 0, 00:11:18.682 "state": "configuring", 00:11:18.682 "raid_level": "raid1", 00:11:18.682 "superblock": true, 00:11:18.682 "num_base_bdevs": 4, 00:11:18.682 "num_base_bdevs_discovered": 2, 00:11:18.682 "num_base_bdevs_operational": 3, 00:11:18.682 "base_bdevs_list": [ 00:11:18.682 { 00:11:18.682 "name": null, 00:11:18.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.682 "is_configured": false, 00:11:18.682 "data_offset": 2048, 00:11:18.682 
"data_size": 63488 00:11:18.682 }, 00:11:18.682 { 00:11:18.682 "name": "pt2", 00:11:18.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.682 "is_configured": true, 00:11:18.682 "data_offset": 2048, 00:11:18.682 "data_size": 63488 00:11:18.682 }, 00:11:18.682 { 00:11:18.682 "name": "pt3", 00:11:18.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.682 "is_configured": true, 00:11:18.682 "data_offset": 2048, 00:11:18.682 "data_size": 63488 00:11:18.682 }, 00:11:18.682 { 00:11:18.682 "name": null, 00:11:18.682 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.682 "is_configured": false, 00:11:18.682 "data_offset": 2048, 00:11:18.682 "data_size": 63488 00:11:18.682 } 00:11:18.682 ] 00:11:18.682 }' 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.682 12:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.941 [2024-11-19 
12:31:24.192208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:18.941 [2024-11-19 12:31:24.192323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.941 [2024-11-19 12:31:24.192348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:18.941 [2024-11-19 12:31:24.192360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.941 [2024-11-19 12:31:24.192787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.941 [2024-11-19 12:31:24.192809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:18.941 [2024-11-19 12:31:24.192880] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:18.941 [2024-11-19 12:31:24.192905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:18.941 [2024-11-19 12:31:24.193002] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:18.941 [2024-11-19 12:31:24.193015] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.941 [2024-11-19 12:31:24.193248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:18.941 [2024-11-19 12:31:24.193373] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:18.941 [2024-11-19 12:31:24.193381] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:18.941 [2024-11-19 12:31:24.193487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.941 pt4 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.941 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:18.942 12:31:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.942 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.942 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.942 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.942 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.942 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.942 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.942 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.942 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.201 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.201 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.201 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.201 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.201 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.201 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.201 "name": "raid_bdev1", 00:11:19.201 "uuid": "e140a64c-3d45-41b9-b629-37f6a9e61a0f", 00:11:19.201 "strip_size_kb": 0, 00:11:19.201 "state": "online", 00:11:19.201 "raid_level": "raid1", 00:11:19.201 "superblock": true, 00:11:19.201 "num_base_bdevs": 4, 00:11:19.201 "num_base_bdevs_discovered": 3, 00:11:19.201 "num_base_bdevs_operational": 3, 00:11:19.201 "base_bdevs_list": [ 00:11:19.201 { 
00:11:19.201 "name": null, 00:11:19.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.201 "is_configured": false, 00:11:19.201 "data_offset": 2048, 00:11:19.201 "data_size": 63488 00:11:19.201 }, 00:11:19.201 { 00:11:19.201 "name": "pt2", 00:11:19.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.201 "is_configured": true, 00:11:19.201 "data_offset": 2048, 00:11:19.201 "data_size": 63488 00:11:19.201 }, 00:11:19.201 { 00:11:19.201 "name": "pt3", 00:11:19.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.201 "is_configured": true, 00:11:19.201 "data_offset": 2048, 00:11:19.201 "data_size": 63488 00:11:19.201 }, 00:11:19.201 { 00:11:19.201 "name": "pt4", 00:11:19.201 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.201 "is_configured": true, 00:11:19.201 "data_offset": 2048, 00:11:19.201 "data_size": 63488 00:11:19.201 } 00:11:19.201 ] 00:11:19.201 }' 00:11:19.201 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.201 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.461 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:19.461 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:19.461 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.461 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.461 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.461 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:19.461 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.461 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.461 
12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.461 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:19.461 [2024-11-19 12:31:24.623771] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e140a64c-3d45-41b9-b629-37f6a9e61a0f '!=' e140a64c-3d45-41b9-b629-37f6a9e61a0f ']' 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85469 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85469 ']' 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85469 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85469 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.462 killing process with pid 85469 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85469' 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85469 00:11:19.462 [2024-11-19 12:31:24.690111] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.462 [2024-11-19 12:31:24.690203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.462 [2024-11-19 12:31:24.690283] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.462 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85469 00:11:19.462 [2024-11-19 12:31:24.690292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:19.722 [2024-11-19 12:31:24.734524] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.981 12:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:19.981 00:11:19.981 real 0m6.901s 00:11:19.981 user 0m11.486s 00:11:19.981 sys 0m1.528s 00:11:19.981 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.981 ************************************ 00:11:19.981 END TEST raid_superblock_test 00:11:19.981 ************************************ 00:11:19.981 12:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.981 12:31:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:19.981 12:31:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:19.981 12:31:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.981 12:31:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.981 ************************************ 00:11:19.981 START TEST raid_read_error_test 00:11:19.981 ************************************ 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:19.981 12:31:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8EL4cy1979 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85934 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85934 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85934 ']' 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.981 12:31:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.981 [2024-11-19 12:31:25.168760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:19.981 [2024-11-19 12:31:25.168897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85934 ] 00:11:20.241 [2024-11-19 12:31:25.335393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.241 [2024-11-19 12:31:25.382302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.241 [2024-11-19 12:31:25.426339] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.241 [2024-11-19 12:31:25.426383] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.810 12:31:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.810 BaseBdev1_malloc 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.810 true 00:11:20.810 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:20.811 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:20.811 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.811 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.811 [2024-11-19 12:31:26.037758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:20.811 [2024-11-19 12:31:26.037896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.811 [2024-11-19 12:31:26.037946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:20.811 [2024-11-19 12:31:26.037955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.811 [2024-11-19 12:31:26.040189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.811 [2024-11-19 12:31:26.040231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:20.811 BaseBdev1 00:11:20.811 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.811 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.811 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:20.811 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.811 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 BaseBdev2_malloc 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 true 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 [2024-11-19 12:31:26.087279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:21.070 [2024-11-19 12:31:26.087395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.070 [2024-11-19 12:31:26.087418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:21.070 [2024-11-19 12:31:26.087426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.070 [2024-11-19 12:31:26.089459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.070 [2024-11-19 12:31:26.089495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:21.070 BaseBdev2 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 BaseBdev3_malloc 00:11:21.070 12:31:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 true 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 [2024-11-19 12:31:26.128068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:21.070 [2024-11-19 12:31:26.128120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.070 [2024-11-19 12:31:26.128139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:21.070 [2024-11-19 12:31:26.128147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.070 [2024-11-19 12:31:26.130153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.070 [2024-11-19 12:31:26.130249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:21.070 BaseBdev3 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 BaseBdev4_malloc 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 true 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.070 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 [2024-11-19 12:31:26.168783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:21.071 [2024-11-19 12:31:26.168835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.071 [2024-11-19 12:31:26.168858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:21.071 [2024-11-19 12:31:26.168866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.071 [2024-11-19 12:31:26.170885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.071 [2024-11-19 12:31:26.170921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:21.071 BaseBdev4 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 [2024-11-19 12:31:26.180813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.071 [2024-11-19 12:31:26.182623] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.071 [2024-11-19 12:31:26.182716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.071 [2024-11-19 12:31:26.182796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:21.071 [2024-11-19 12:31:26.183002] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:21.071 [2024-11-19 12:31:26.183020] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.071 [2024-11-19 12:31:26.183297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:21.071 [2024-11-19 12:31:26.183434] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:21.071 [2024-11-19 12:31:26.183449] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:21.071 [2024-11-19 12:31:26.183571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:21.071 12:31:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.071 "name": "raid_bdev1", 00:11:21.071 "uuid": "6478e225-388f-49b2-aa4a-a210d5eff533", 00:11:21.071 "strip_size_kb": 0, 00:11:21.071 "state": "online", 00:11:21.071 "raid_level": "raid1", 00:11:21.071 "superblock": true, 00:11:21.071 "num_base_bdevs": 4, 00:11:21.071 "num_base_bdevs_discovered": 4, 00:11:21.071 "num_base_bdevs_operational": 4, 00:11:21.071 "base_bdevs_list": [ 00:11:21.071 { 
00:11:21.071 "name": "BaseBdev1", 00:11:21.071 "uuid": "9ab61898-a551-5e91-8edb-9d95f627249a", 00:11:21.071 "is_configured": true, 00:11:21.071 "data_offset": 2048, 00:11:21.071 "data_size": 63488 00:11:21.071 }, 00:11:21.071 { 00:11:21.071 "name": "BaseBdev2", 00:11:21.071 "uuid": "cb69b264-2431-5c05-8cd8-4e435af9adfd", 00:11:21.071 "is_configured": true, 00:11:21.071 "data_offset": 2048, 00:11:21.071 "data_size": 63488 00:11:21.071 }, 00:11:21.071 { 00:11:21.071 "name": "BaseBdev3", 00:11:21.071 "uuid": "0c5a28cf-52a2-56ef-822e-6ae39700385b", 00:11:21.071 "is_configured": true, 00:11:21.071 "data_offset": 2048, 00:11:21.071 "data_size": 63488 00:11:21.071 }, 00:11:21.071 { 00:11:21.071 "name": "BaseBdev4", 00:11:21.071 "uuid": "7d01324c-31c2-5c79-8b3b-28d9f807b343", 00:11:21.071 "is_configured": true, 00:11:21.071 "data_offset": 2048, 00:11:21.071 "data_size": 63488 00:11:21.071 } 00:11:21.071 ] 00:11:21.071 }' 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.071 12:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.641 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:21.641 12:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:21.641 [2024-11-19 12:31:26.732326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.601 12:31:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.601 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.602 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.602 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.602 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.602 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.602 12:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.602 12:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.602 12:31:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.602 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.602 "name": "raid_bdev1", 00:11:22.602 "uuid": "6478e225-388f-49b2-aa4a-a210d5eff533", 00:11:22.602 "strip_size_kb": 0, 00:11:22.602 "state": "online", 00:11:22.602 "raid_level": "raid1", 00:11:22.602 "superblock": true, 00:11:22.602 "num_base_bdevs": 4, 00:11:22.602 "num_base_bdevs_discovered": 4, 00:11:22.602 "num_base_bdevs_operational": 4, 00:11:22.602 "base_bdevs_list": [ 00:11:22.602 { 00:11:22.602 "name": "BaseBdev1", 00:11:22.602 "uuid": "9ab61898-a551-5e91-8edb-9d95f627249a", 00:11:22.602 "is_configured": true, 00:11:22.602 "data_offset": 2048, 00:11:22.602 "data_size": 63488 00:11:22.602 }, 00:11:22.602 { 00:11:22.602 "name": "BaseBdev2", 00:11:22.602 "uuid": "cb69b264-2431-5c05-8cd8-4e435af9adfd", 00:11:22.602 "is_configured": true, 00:11:22.602 "data_offset": 2048, 00:11:22.602 "data_size": 63488 00:11:22.602 }, 00:11:22.602 { 00:11:22.602 "name": "BaseBdev3", 00:11:22.602 "uuid": "0c5a28cf-52a2-56ef-822e-6ae39700385b", 00:11:22.602 "is_configured": true, 00:11:22.602 "data_offset": 2048, 00:11:22.602 "data_size": 63488 00:11:22.602 }, 00:11:22.602 { 00:11:22.602 "name": "BaseBdev4", 00:11:22.602 "uuid": "7d01324c-31c2-5c79-8b3b-28d9f807b343", 00:11:22.602 "is_configured": true, 00:11:22.602 "data_offset": 2048, 00:11:22.602 "data_size": 63488 00:11:22.602 } 00:11:22.602 ] 00:11:22.602 }' 00:11:22.602 12:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.602 12:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.861 12:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:22.861 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.861 12:31:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.861 [2024-11-19 12:31:28.086641] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.861 [2024-11-19 12:31:28.086682] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.861 [2024-11-19 12:31:28.089311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.861 [2024-11-19 12:31:28.089363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.861 [2024-11-19 12:31:28.089481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.861 [2024-11-19 12:31:28.089492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:22.861 { 00:11:22.861 "results": [ 00:11:22.861 { 00:11:22.861 "job": "raid_bdev1", 00:11:22.861 "core_mask": "0x1", 00:11:22.861 "workload": "randrw", 00:11:22.861 "percentage": 50, 00:11:22.861 "status": "finished", 00:11:22.861 "queue_depth": 1, 00:11:22.861 "io_size": 131072, 00:11:22.861 "runtime": 1.355162, 00:11:22.861 "iops": 11484.235833059074, 00:11:22.861 "mibps": 1435.5294791323843, 00:11:22.861 "io_failed": 0, 00:11:22.861 "io_timeout": 0, 00:11:22.861 "avg_latency_us": 84.53152356936604, 00:11:22.861 "min_latency_us": 22.246288209606988, 00:11:22.861 "max_latency_us": 1523.926637554585 00:11:22.861 } 00:11:22.861 ], 00:11:22.861 "core_count": 1 00:11:22.861 } 00:11:22.861 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.861 12:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85934 00:11:22.861 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85934 ']' 00:11:22.861 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85934 00:11:22.861 12:31:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:22.862 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.862 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85934 00:11:23.121 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.121 killing process with pid 85934 00:11:23.121 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.121 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85934' 00:11:23.121 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85934 00:11:23.121 [2024-11-19 12:31:28.131583] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.121 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85934 00:11:23.121 [2024-11-19 12:31:28.166940] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8EL4cy1979 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:23.381 ************************************ 00:11:23.381 END TEST raid_read_error_test 00:11:23.381 ************************************ 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:23.381 00:11:23.381 real 0m3.359s 00:11:23.381 user 0m4.205s 00:11:23.381 sys 0m0.599s 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.381 12:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.381 12:31:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:23.381 12:31:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:23.381 12:31:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.381 12:31:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.381 ************************************ 00:11:23.381 START TEST raid_write_error_test 00:11:23.381 ************************************ 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DQWn4Yt96J 00:11:23.381 12:31:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86073 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86073 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 86073 ']' 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.381 12:31:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.381 [2024-11-19 12:31:28.599557] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:23.381 [2024-11-19 12:31:28.599771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86073 ] 00:11:23.641 [2024-11-19 12:31:28.765108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.641 [2024-11-19 12:31:28.811285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.641 [2024-11-19 12:31:28.855077] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.641 [2024-11-19 12:31:28.855114] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.211 BaseBdev1_malloc 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.211 true 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.211 [2024-11-19 12:31:29.446684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:24.211 [2024-11-19 12:31:29.446778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.211 [2024-11-19 12:31:29.446805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:24.211 [2024-11-19 12:31:29.446823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.211 [2024-11-19 12:31:29.448862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.211 [2024-11-19 12:31:29.448907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:24.211 BaseBdev1 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.211 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.471 BaseBdev2_malloc 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:24.471 12:31:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.471 true 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.471 [2024-11-19 12:31:29.498525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:24.471 [2024-11-19 12:31:29.498659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.471 [2024-11-19 12:31:29.498681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:24.471 [2024-11-19 12:31:29.498691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.471 [2024-11-19 12:31:29.500655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.471 [2024-11-19 12:31:29.500695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:24.471 BaseBdev2 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:24.471 BaseBdev3_malloc 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.471 true 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.471 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.471 [2024-11-19 12:31:29.539244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:24.471 [2024-11-19 12:31:29.539296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.471 [2024-11-19 12:31:29.539314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:24.471 [2024-11-19 12:31:29.539323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.471 [2024-11-19 12:31:29.541366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.472 [2024-11-19 12:31:29.541440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:24.472 BaseBdev3 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.472 BaseBdev4_malloc 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.472 true 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.472 [2024-11-19 12:31:29.580062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:24.472 [2024-11-19 12:31:29.580159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.472 [2024-11-19 12:31:29.580184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:24.472 [2024-11-19 12:31:29.580193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.472 [2024-11-19 12:31:29.582213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.472 [2024-11-19 12:31:29.582250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:24.472 BaseBdev4 
00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.472 [2024-11-19 12:31:29.592103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.472 [2024-11-19 12:31:29.593898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.472 [2024-11-19 12:31:29.593983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.472 [2024-11-19 12:31:29.594035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.472 [2024-11-19 12:31:29.594228] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:24.472 [2024-11-19 12:31:29.594240] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.472 [2024-11-19 12:31:29.594484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:24.472 [2024-11-19 12:31:29.594625] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:24.472 [2024-11-19 12:31:29.594642] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:24.472 [2024-11-19 12:31:29.594785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.472 "name": "raid_bdev1", 00:11:24.472 "uuid": "fdbd6050-53b6-4a35-aa5b-5fbb16457cce", 00:11:24.472 "strip_size_kb": 0, 00:11:24.472 "state": "online", 00:11:24.472 "raid_level": "raid1", 00:11:24.472 "superblock": true, 00:11:24.472 "num_base_bdevs": 4, 00:11:24.472 "num_base_bdevs_discovered": 4, 00:11:24.472 
"num_base_bdevs_operational": 4, 00:11:24.472 "base_bdevs_list": [ 00:11:24.472 { 00:11:24.472 "name": "BaseBdev1", 00:11:24.472 "uuid": "2e33fe67-f409-5052-a01e-52397f9a32ee", 00:11:24.472 "is_configured": true, 00:11:24.472 "data_offset": 2048, 00:11:24.472 "data_size": 63488 00:11:24.472 }, 00:11:24.472 { 00:11:24.472 "name": "BaseBdev2", 00:11:24.472 "uuid": "f542270b-6970-55b3-b8cd-653dff447719", 00:11:24.472 "is_configured": true, 00:11:24.472 "data_offset": 2048, 00:11:24.472 "data_size": 63488 00:11:24.472 }, 00:11:24.472 { 00:11:24.472 "name": "BaseBdev3", 00:11:24.472 "uuid": "25552a5d-8b6c-54f2-980d-dc010f6c36dd", 00:11:24.472 "is_configured": true, 00:11:24.472 "data_offset": 2048, 00:11:24.472 "data_size": 63488 00:11:24.472 }, 00:11:24.472 { 00:11:24.472 "name": "BaseBdev4", 00:11:24.472 "uuid": "b15ac344-f6d5-5d7f-98ec-d9630e089d94", 00:11:24.472 "is_configured": true, 00:11:24.472 "data_offset": 2048, 00:11:24.472 "data_size": 63488 00:11:24.472 } 00:11:24.472 ] 00:11:24.472 }' 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.472 12:31:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.041 12:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:25.041 12:31:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:25.041 [2024-11-19 12:31:30.171446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:25.981 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:25.981 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.981 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 [2024-11-19 12:31:31.086363] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:25.981 [2024-11-19 12:31:31.086534] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.981 [2024-11-19 12:31:31.086843] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:25.981 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.981 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:25.981 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:25.981 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.982 "name": "raid_bdev1", 00:11:25.982 "uuid": "fdbd6050-53b6-4a35-aa5b-5fbb16457cce", 00:11:25.982 "strip_size_kb": 0, 00:11:25.982 "state": "online", 00:11:25.982 "raid_level": "raid1", 00:11:25.982 "superblock": true, 00:11:25.982 "num_base_bdevs": 4, 00:11:25.982 "num_base_bdevs_discovered": 3, 00:11:25.982 "num_base_bdevs_operational": 3, 00:11:25.982 "base_bdevs_list": [ 00:11:25.982 { 00:11:25.982 "name": null, 00:11:25.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.982 "is_configured": false, 00:11:25.982 "data_offset": 0, 00:11:25.982 "data_size": 63488 00:11:25.982 }, 00:11:25.982 { 00:11:25.982 "name": "BaseBdev2", 00:11:25.982 "uuid": "f542270b-6970-55b3-b8cd-653dff447719", 00:11:25.982 "is_configured": true, 00:11:25.982 "data_offset": 2048, 00:11:25.982 "data_size": 63488 00:11:25.982 }, 00:11:25.982 { 00:11:25.982 "name": "BaseBdev3", 00:11:25.982 "uuid": "25552a5d-8b6c-54f2-980d-dc010f6c36dd", 00:11:25.982 "is_configured": true, 00:11:25.982 "data_offset": 2048, 00:11:25.982 "data_size": 63488 00:11:25.982 }, 00:11:25.982 { 00:11:25.982 "name": "BaseBdev4", 00:11:25.982 "uuid": "b15ac344-f6d5-5d7f-98ec-d9630e089d94", 00:11:25.982 "is_configured": true, 00:11:25.982 "data_offset": 2048, 00:11:25.982 "data_size": 63488 00:11:25.982 } 00:11:25.982 ] 
00:11:25.982 }' 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.982 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.551 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.551 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.551 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.551 [2024-11-19 12:31:31.530425] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.551 [2024-11-19 12:31:31.530472] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.551 [2024-11-19 12:31:31.533075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.551 [2024-11-19 12:31:31.533160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.551 [2024-11-19 12:31:31.533283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.552 [2024-11-19 12:31:31.533333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.552 { 00:11:26.552 "results": [ 00:11:26.552 { 00:11:26.552 "job": "raid_bdev1", 00:11:26.552 "core_mask": "0x1", 00:11:26.552 "workload": "randrw", 00:11:26.552 "percentage": 50, 00:11:26.552 "status": "finished", 00:11:26.552 "queue_depth": 1, 00:11:26.552 "io_size": 131072, 00:11:26.552 "runtime": 1.359717, 00:11:26.552 "iops": 12248.872375648756, 00:11:26.552 "mibps": 1531.1090469560945, 00:11:26.552 "io_failed": 0, 00:11:26.552 "io_timeout": 0, 00:11:26.552 "avg_latency_us": 79.05634359772365, 00:11:26.552 "min_latency_us": 21.910917030567685, 
00:11:26.552 "max_latency_us": 1423.7624454148472 00:11:26.552 } 00:11:26.552 ], 00:11:26.552 "core_count": 1 00:11:26.552 } 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86073 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 86073 ']' 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 86073 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86073 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.552 killing process with pid 86073 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86073' 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 86073 00:11:26.552 [2024-11-19 12:31:31.571061] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.552 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 86073 00:11:26.552 [2024-11-19 12:31:31.607929] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DQWn4Yt96J 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # fail_per_s=0.00 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:26.811 ************************************ 00:11:26.811 END TEST raid_write_error_test 00:11:26.811 ************************************ 00:11:26.811 00:11:26.811 real 0m3.371s 00:11:26.811 user 0m4.233s 00:11:26.811 sys 0m0.566s 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.811 12:31:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.811 12:31:31 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:26.811 12:31:31 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:26.811 12:31:31 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:26.811 12:31:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:26.811 12:31:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.811 12:31:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.811 ************************************ 00:11:26.811 START TEST raid_rebuild_test 00:11:26.811 ************************************ 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:26.811 
12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:26.811 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86201 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86201 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86201 ']' 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.812 12:31:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.812 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:26.812 Zero copy mechanism will not be used. 00:11:26.812 [2024-11-19 12:31:32.023078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:26.812 [2024-11-19 12:31:32.023199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86201 ] 00:11:27.072 [2024-11-19 12:31:32.184280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.072 [2024-11-19 12:31:32.233613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.072 [2024-11-19 12:31:32.277383] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.072 [2024-11-19 12:31:32.277420] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.643 BaseBdev1_malloc 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.643 [2024-11-19 12:31:32.888795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:27.643 
[2024-11-19 12:31:32.888863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.643 [2024-11-19 12:31:32.888886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:27.643 [2024-11-19 12:31:32.888900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.643 [2024-11-19 12:31:32.891012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.643 [2024-11-19 12:31:32.891048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:27.643 BaseBdev1 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.643 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.904 BaseBdev2_malloc 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.904 [2024-11-19 12:31:32.925823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:27.904 [2024-11-19 12:31:32.925889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.904 [2024-11-19 12:31:32.925914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:11:27.904 [2024-11-19 12:31:32.925925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.904 [2024-11-19 12:31:32.928374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.904 [2024-11-19 12:31:32.928412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:27.904 BaseBdev2 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.904 spare_malloc 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.904 spare_delay 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.904 [2024-11-19 12:31:32.954593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:27.904 [2024-11-19 12:31:32.954652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:27.904 [2024-11-19 12:31:32.954675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:27.904 [2024-11-19 12:31:32.954683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.904 [2024-11-19 12:31:32.956756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.904 [2024-11-19 12:31:32.956803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:27.904 spare 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.904 [2024-11-19 12:31:32.966616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.904 [2024-11-19 12:31:32.968508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.904 [2024-11-19 12:31:32.968640] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:27.904 [2024-11-19 12:31:32.968655] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:27.904 [2024-11-19 12:31:32.968895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:27.904 [2024-11-19 12:31:32.969012] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:27.904 [2024-11-19 12:31:32.969025] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:27.904 [2024-11-19 12:31:32.969140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.904 12:31:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.904 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.904 "name": "raid_bdev1", 00:11:27.904 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:27.904 "strip_size_kb": 0, 00:11:27.904 "state": "online", 00:11:27.904 
"raid_level": "raid1", 00:11:27.904 "superblock": false, 00:11:27.904 "num_base_bdevs": 2, 00:11:27.904 "num_base_bdevs_discovered": 2, 00:11:27.904 "num_base_bdevs_operational": 2, 00:11:27.904 "base_bdevs_list": [ 00:11:27.904 { 00:11:27.904 "name": "BaseBdev1", 00:11:27.904 "uuid": "02f9dcf0-462a-58ea-8e6c-c6ad3f345c9f", 00:11:27.904 "is_configured": true, 00:11:27.904 "data_offset": 0, 00:11:27.904 "data_size": 65536 00:11:27.904 }, 00:11:27.904 { 00:11:27.904 "name": "BaseBdev2", 00:11:27.904 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:27.904 "is_configured": true, 00:11:27.904 "data_offset": 0, 00:11:27.904 "data_size": 65536 00:11:27.904 } 00:11:27.904 ] 00:11:27.904 }' 00:11:27.904 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.904 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.164 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:28.164 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.164 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.164 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.164 [2024-11-19 12:31:33.406202] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:28.424 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:28.425 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:28.425 [2024-11-19 12:31:33.681495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:28.712 /dev/nbd0 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.712 1+0 records in 00:11:28.712 1+0 records out 00:11:28.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618269 s, 6.6 MB/s 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:28.712 12:31:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:32.909 65536+0 records in 00:11:32.909 65536+0 records out 00:11:32.909 33554432 bytes (34 MB, 32 MiB) copied, 4.35239 s, 7.7 MB/s 00:11:32.909 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:32.909 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.909 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:32.909 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:32.909 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:32.909 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.909 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:33.169 [2024-11-19 12:31:38.287031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.169 [2024-11-19 12:31:38.327072] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.169 "name": "raid_bdev1", 00:11:33.169 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:33.169 "strip_size_kb": 0, 00:11:33.169 "state": "online", 00:11:33.169 "raid_level": "raid1", 00:11:33.169 "superblock": false, 00:11:33.169 "num_base_bdevs": 2, 00:11:33.169 "num_base_bdevs_discovered": 1, 00:11:33.169 "num_base_bdevs_operational": 1, 00:11:33.169 "base_bdevs_list": [ 00:11:33.169 { 00:11:33.169 "name": null, 00:11:33.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.169 "is_configured": false, 00:11:33.169 "data_offset": 0, 00:11:33.169 "data_size": 65536 00:11:33.169 }, 00:11:33.169 { 00:11:33.169 "name": "BaseBdev2", 00:11:33.169 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:33.169 "is_configured": true, 00:11:33.169 "data_offset": 0, 00:11:33.169 "data_size": 65536 00:11:33.169 } 00:11:33.169 ] 00:11:33.169 }' 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.169 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.739 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:33.739 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.739 12:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.739 [2024-11-19 12:31:38.750419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:33.739 [2024-11-19 12:31:38.754776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:11:33.739 12:31:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.739 12:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:33.739 [2024-11-19 12:31:38.756628] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.679 "name": "raid_bdev1", 00:11:34.679 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:34.679 "strip_size_kb": 0, 00:11:34.679 "state": "online", 00:11:34.679 "raid_level": "raid1", 00:11:34.679 "superblock": false, 00:11:34.679 "num_base_bdevs": 2, 00:11:34.679 "num_base_bdevs_discovered": 2, 00:11:34.679 "num_base_bdevs_operational": 2, 00:11:34.679 "process": { 00:11:34.679 "type": "rebuild", 00:11:34.679 "target": "spare", 00:11:34.679 "progress": { 00:11:34.679 "blocks": 20480, 
00:11:34.679 "percent": 31 00:11:34.679 } 00:11:34.679 }, 00:11:34.679 "base_bdevs_list": [ 00:11:34.679 { 00:11:34.679 "name": "spare", 00:11:34.679 "uuid": "770a321b-10ef-5fa5-abf4-39fab005e595", 00:11:34.679 "is_configured": true, 00:11:34.679 "data_offset": 0, 00:11:34.679 "data_size": 65536 00:11:34.679 }, 00:11:34.679 { 00:11:34.679 "name": "BaseBdev2", 00:11:34.679 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:34.679 "is_configured": true, 00:11:34.679 "data_offset": 0, 00:11:34.679 "data_size": 65536 00:11:34.679 } 00:11:34.679 ] 00:11:34.679 }' 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.679 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.679 [2024-11-19 12:31:39.901444] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.939 [2024-11-19 12:31:39.961968] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:34.939 [2024-11-19 12:31:39.962158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.939 [2024-11-19 12:31:39.962197] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.939 [2024-11-19 12:31:39.962217] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:34.939 12:31:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.939 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.940 12:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.940 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.940 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.940 12:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.940 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.940 "name": "raid_bdev1", 00:11:34.940 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:34.940 "strip_size_kb": 0, 00:11:34.940 "state": "online", 00:11:34.940 "raid_level": "raid1", 00:11:34.940 
"superblock": false, 00:11:34.940 "num_base_bdevs": 2, 00:11:34.940 "num_base_bdevs_discovered": 1, 00:11:34.940 "num_base_bdevs_operational": 1, 00:11:34.940 "base_bdevs_list": [ 00:11:34.940 { 00:11:34.940 "name": null, 00:11:34.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.940 "is_configured": false, 00:11:34.940 "data_offset": 0, 00:11:34.940 "data_size": 65536 00:11:34.940 }, 00:11:34.940 { 00:11:34.940 "name": "BaseBdev2", 00:11:34.940 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:34.940 "is_configured": true, 00:11:34.940 "data_offset": 0, 00:11:34.940 "data_size": 65536 00:11:34.940 } 00:11:34.940 ] 00:11:34.940 }' 00:11:34.940 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.940 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:35.199 "name": "raid_bdev1", 00:11:35.199 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:35.199 "strip_size_kb": 0, 00:11:35.199 "state": "online", 00:11:35.199 "raid_level": "raid1", 00:11:35.199 "superblock": false, 00:11:35.199 "num_base_bdevs": 2, 00:11:35.199 "num_base_bdevs_discovered": 1, 00:11:35.199 "num_base_bdevs_operational": 1, 00:11:35.199 "base_bdevs_list": [ 00:11:35.199 { 00:11:35.199 "name": null, 00:11:35.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.199 "is_configured": false, 00:11:35.199 "data_offset": 0, 00:11:35.199 "data_size": 65536 00:11:35.199 }, 00:11:35.199 { 00:11:35.199 "name": "BaseBdev2", 00:11:35.199 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:35.199 "is_configured": true, 00:11:35.199 "data_offset": 0, 00:11:35.199 "data_size": 65536 00:11:35.199 } 00:11:35.199 ] 00:11:35.199 }' 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:35.199 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:35.458 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:35.459 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:35.459 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.459 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.459 [2024-11-19 12:31:40.494208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:35.459 [2024-11-19 12:31:40.498566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:11:35.459 [2024-11-19 12:31:40.500401] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:11:35.459 12:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.459 12:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.448 "name": "raid_bdev1", 00:11:36.448 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:36.448 "strip_size_kb": 0, 00:11:36.448 "state": "online", 00:11:36.448 "raid_level": "raid1", 00:11:36.448 "superblock": false, 00:11:36.448 "num_base_bdevs": 2, 00:11:36.448 "num_base_bdevs_discovered": 2, 00:11:36.448 "num_base_bdevs_operational": 2, 00:11:36.448 "process": { 00:11:36.448 "type": "rebuild", 00:11:36.448 "target": "spare", 00:11:36.448 "progress": { 00:11:36.448 "blocks": 20480, 00:11:36.448 "percent": 31 00:11:36.448 } 00:11:36.448 }, 00:11:36.448 "base_bdevs_list": [ 
00:11:36.448 { 00:11:36.448 "name": "spare", 00:11:36.448 "uuid": "770a321b-10ef-5fa5-abf4-39fab005e595", 00:11:36.448 "is_configured": true, 00:11:36.448 "data_offset": 0, 00:11:36.448 "data_size": 65536 00:11:36.448 }, 00:11:36.448 { 00:11:36.448 "name": "BaseBdev2", 00:11:36.448 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:36.448 "is_configured": true, 00:11:36.448 "data_offset": 0, 00:11:36.448 "data_size": 65536 00:11:36.448 } 00:11:36.448 ] 00:11:36.448 }' 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=294 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:36.448 
12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.448 "name": "raid_bdev1", 00:11:36.448 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:36.448 "strip_size_kb": 0, 00:11:36.448 "state": "online", 00:11:36.448 "raid_level": "raid1", 00:11:36.448 "superblock": false, 00:11:36.448 "num_base_bdevs": 2, 00:11:36.448 "num_base_bdevs_discovered": 2, 00:11:36.448 "num_base_bdevs_operational": 2, 00:11:36.448 "process": { 00:11:36.448 "type": "rebuild", 00:11:36.448 "target": "spare", 00:11:36.448 "progress": { 00:11:36.448 "blocks": 22528, 00:11:36.448 "percent": 34 00:11:36.448 } 00:11:36.448 }, 00:11:36.448 "base_bdevs_list": [ 00:11:36.448 { 00:11:36.448 "name": "spare", 00:11:36.448 "uuid": "770a321b-10ef-5fa5-abf4-39fab005e595", 00:11:36.448 "is_configured": true, 00:11:36.448 "data_offset": 0, 00:11:36.448 "data_size": 65536 00:11:36.448 }, 00:11:36.448 { 00:11:36.448 "name": "BaseBdev2", 00:11:36.448 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:36.448 "is_configured": true, 00:11:36.448 "data_offset": 0, 00:11:36.448 "data_size": 65536 00:11:36.448 } 00:11:36.448 ] 00:11:36.448 }' 00:11:36.448 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.715 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:36.715 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.715 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:36.715 12:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.654 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.654 "name": "raid_bdev1", 00:11:37.654 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:37.654 "strip_size_kb": 0, 00:11:37.654 "state": "online", 00:11:37.654 "raid_level": "raid1", 00:11:37.654 "superblock": false, 00:11:37.654 "num_base_bdevs": 2, 00:11:37.654 "num_base_bdevs_discovered": 2, 00:11:37.654 "num_base_bdevs_operational": 2, 00:11:37.654 "process": { 
00:11:37.654 "type": "rebuild", 00:11:37.654 "target": "spare", 00:11:37.654 "progress": { 00:11:37.654 "blocks": 45056, 00:11:37.654 "percent": 68 00:11:37.654 } 00:11:37.654 }, 00:11:37.654 "base_bdevs_list": [ 00:11:37.654 { 00:11:37.654 "name": "spare", 00:11:37.655 "uuid": "770a321b-10ef-5fa5-abf4-39fab005e595", 00:11:37.655 "is_configured": true, 00:11:37.655 "data_offset": 0, 00:11:37.655 "data_size": 65536 00:11:37.655 }, 00:11:37.655 { 00:11:37.655 "name": "BaseBdev2", 00:11:37.655 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:37.655 "is_configured": true, 00:11:37.655 "data_offset": 0, 00:11:37.655 "data_size": 65536 00:11:37.655 } 00:11:37.655 ] 00:11:37.655 }' 00:11:37.655 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.655 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:37.655 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.914 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.914 12:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:38.483 [2024-11-19 12:31:43.714190] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:38.483 [2024-11-19 12:31:43.714282] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:38.483 [2024-11-19 12:31:43.714333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.742 12:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:38.742 12:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:38.742 12:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.742 12:31:43 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:38.742 12:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:38.742 12:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.742 12:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.742 12:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.743 12:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.743 12:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.743 12:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.743 12:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.743 "name": "raid_bdev1", 00:11:38.743 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:38.743 "strip_size_kb": 0, 00:11:38.743 "state": "online", 00:11:38.743 "raid_level": "raid1", 00:11:38.743 "superblock": false, 00:11:38.743 "num_base_bdevs": 2, 00:11:38.743 "num_base_bdevs_discovered": 2, 00:11:38.743 "num_base_bdevs_operational": 2, 00:11:38.743 "base_bdevs_list": [ 00:11:38.743 { 00:11:38.743 "name": "spare", 00:11:38.743 "uuid": "770a321b-10ef-5fa5-abf4-39fab005e595", 00:11:38.743 "is_configured": true, 00:11:38.743 "data_offset": 0, 00:11:38.743 "data_size": 65536 00:11:38.743 }, 00:11:38.743 { 00:11:38.743 "name": "BaseBdev2", 00:11:38.743 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:38.743 "is_configured": true, 00:11:38.743 "data_offset": 0, 00:11:38.743 "data_size": 65536 00:11:38.743 } 00:11:38.743 ] 00:11:38.743 }' 00:11:38.743 12:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:39.002 12:31:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.002 "name": "raid_bdev1", 00:11:39.002 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:39.002 "strip_size_kb": 0, 00:11:39.002 "state": "online", 00:11:39.002 "raid_level": "raid1", 00:11:39.002 "superblock": false, 00:11:39.002 "num_base_bdevs": 2, 00:11:39.002 "num_base_bdevs_discovered": 2, 00:11:39.002 "num_base_bdevs_operational": 2, 00:11:39.002 "base_bdevs_list": [ 00:11:39.002 { 00:11:39.002 "name": "spare", 00:11:39.002 "uuid": "770a321b-10ef-5fa5-abf4-39fab005e595", 00:11:39.002 "is_configured": true, 
00:11:39.002 "data_offset": 0, 00:11:39.002 "data_size": 65536 00:11:39.002 }, 00:11:39.002 { 00:11:39.002 "name": "BaseBdev2", 00:11:39.002 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:39.002 "is_configured": true, 00:11:39.002 "data_offset": 0, 00:11:39.002 "data_size": 65536 00:11:39.002 } 00:11:39.002 ] 00:11:39.002 }' 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.002 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.003 12:31:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.003 "name": "raid_bdev1", 00:11:39.003 "uuid": "cb96ee58-821e-4d9d-8b15-3b2016134f0a", 00:11:39.003 "strip_size_kb": 0, 00:11:39.003 "state": "online", 00:11:39.003 "raid_level": "raid1", 00:11:39.003 "superblock": false, 00:11:39.003 "num_base_bdevs": 2, 00:11:39.003 "num_base_bdevs_discovered": 2, 00:11:39.003 "num_base_bdevs_operational": 2, 00:11:39.003 "base_bdevs_list": [ 00:11:39.003 { 00:11:39.003 "name": "spare", 00:11:39.003 "uuid": "770a321b-10ef-5fa5-abf4-39fab005e595", 00:11:39.003 "is_configured": true, 00:11:39.003 "data_offset": 0, 00:11:39.003 "data_size": 65536 00:11:39.003 }, 00:11:39.003 { 00:11:39.003 "name": "BaseBdev2", 00:11:39.003 "uuid": "8a0b9d90-6112-58a8-a96d-a3a1df619539", 00:11:39.003 "is_configured": true, 00:11:39.003 "data_offset": 0, 00:11:39.003 "data_size": 65536 00:11:39.003 } 00:11:39.003 ] 00:11:39.003 }' 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.003 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.571 [2024-11-19 12:31:44.598866] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.571 [2024-11-19 
12:31:44.598964] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.571 [2024-11-19 12:31:44.599076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.571 [2024-11-19 12:31:44.599179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.571 [2024-11-19 12:31:44.599233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:39.571 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:39.830 /dev/nbd0 00:11:39.830 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:39.830 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.831 1+0 records in 00:11:39.831 1+0 records out 00:11:39.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384341 s, 10.7 MB/s 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:39.831 12:31:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:40.090 /dev/nbd1 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:40.090 1+0 records in 00:11:40.090 1+0 records out 00:11:40.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312089 s, 13.1 MB/s 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.090 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:40.350 12:31:45 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:40.350 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:40.350 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:40.350 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.350 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.350 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:40.350 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:40.350 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.350 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.350 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
86201 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 86201 ']' 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86201 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86201 00:11:40.611 killing process with pid 86201 00:11:40.611 Received shutdown signal, test time was about 60.000000 seconds 00:11:40.611 00:11:40.611 Latency(us) 00:11:40.611 [2024-11-19T12:31:45.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.611 [2024-11-19T12:31:45.872Z] =================================================================================================================== 00:11:40.611 [2024-11-19T12:31:45.872Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86201' 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86201 00:11:40.611 [2024-11-19 12:31:45.749394] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.611 12:31:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86201 00:11:40.611 [2024-11-19 12:31:45.782208] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:40.871 00:11:40.871 real 0m14.099s 00:11:40.871 user 0m15.621s 00:11:40.871 sys 
0m3.089s 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.871 ************************************ 00:11:40.871 END TEST raid_rebuild_test 00:11:40.871 ************************************ 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.871 12:31:46 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:40.871 12:31:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:40.871 12:31:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.871 12:31:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.871 ************************************ 00:11:40.871 START TEST raid_rebuild_test_sb 00:11:40.871 ************************************ 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86607 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86607 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' 
-z 86607 ']' 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:40.871 12:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.131 [2024-11-19 12:31:46.205979] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:41.131 [2024-11-19 12:31:46.206214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86607 ] 00:11:41.131 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:41.131 Zero copy mechanism will not be used. 
00:11:41.131 [2024-11-19 12:31:46.372664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.390 [2024-11-19 12:31:46.420863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.390 [2024-11-19 12:31:46.465016] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.390 [2024-11-19 12:31:46.465133] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.991 BaseBdev1_malloc 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.991 [2024-11-19 12:31:47.056185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:41.991 [2024-11-19 12:31:47.056336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.991 [2024-11-19 12:31:47.056364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:41.991 [2024-11-19 
12:31:47.056379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.991 [2024-11-19 12:31:47.058421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.991 [2024-11-19 12:31:47.058460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.991 BaseBdev1 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.991 BaseBdev2_malloc 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.991 [2024-11-19 12:31:47.087712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:41.991 [2024-11-19 12:31:47.087804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.991 [2024-11-19 12:31:47.087831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:41.991 [2024-11-19 12:31:47.087844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.991 [2024-11-19 12:31:47.090576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:41.991 [2024-11-19 12:31:47.090624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.991 BaseBdev2 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.991 spare_malloc 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.991 spare_delay 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.991 [2024-11-19 12:31:47.124597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:41.991 [2024-11-19 12:31:47.124665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.991 [2024-11-19 12:31:47.124689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:41.991 [2024-11-19 12:31:47.124697] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.991 [2024-11-19 12:31:47.126855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.991 [2024-11-19 12:31:47.126892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:41.991 spare 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.991 [2024-11-19 12:31:47.136621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.991 [2024-11-19 12:31:47.138494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.991 [2024-11-19 12:31:47.138643] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:41.991 [2024-11-19 12:31:47.138655] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.991 [2024-11-19 12:31:47.138949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:41.991 [2024-11-19 12:31:47.139092] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:41.991 [2024-11-19 12:31:47.139105] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:41.991 [2024-11-19 12:31:47.139230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.991 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.991 "name": "raid_bdev1", 00:11:41.991 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:41.991 "strip_size_kb": 0, 00:11:41.991 "state": "online", 00:11:41.991 "raid_level": "raid1", 00:11:41.992 "superblock": true, 00:11:41.992 "num_base_bdevs": 2, 00:11:41.992 
"num_base_bdevs_discovered": 2, 00:11:41.992 "num_base_bdevs_operational": 2, 00:11:41.992 "base_bdevs_list": [ 00:11:41.992 { 00:11:41.992 "name": "BaseBdev1", 00:11:41.992 "uuid": "fa7d5a3d-671c-538f-bb8a-c86a937a34b8", 00:11:41.992 "is_configured": true, 00:11:41.992 "data_offset": 2048, 00:11:41.992 "data_size": 63488 00:11:41.992 }, 00:11:41.992 { 00:11:41.992 "name": "BaseBdev2", 00:11:41.992 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:41.992 "is_configured": true, 00:11:41.992 "data_offset": 2048, 00:11:41.992 "data_size": 63488 00:11:41.992 } 00:11:41.992 ] 00:11:41.992 }' 00:11:41.992 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.992 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.560 [2024-11-19 12:31:47.608104] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.560 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:42.820 [2024-11-19 12:31:47.883401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:42.820 /dev/nbd0 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.820 1+0 records in 00:11:42.820 1+0 records out 00:11:42.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519865 s, 7.9 MB/s 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.820 12:31:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:42.820 12:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:48.107 63488+0 records in 00:11:48.107 63488+0 records out 00:11:48.107 32505856 bytes (33 MB, 31 MiB) copied, 4.47452 s, 7.3 MB/s 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:48.107 [2024-11-19 12:31:52.630171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.107 [2024-11-19 12:31:52.670317] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.107 12:31:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.107 "name": "raid_bdev1", 00:11:48.107 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:48.107 "strip_size_kb": 0, 00:11:48.107 "state": "online", 00:11:48.107 "raid_level": "raid1", 00:11:48.107 "superblock": true, 00:11:48.107 "num_base_bdevs": 2, 00:11:48.107 "num_base_bdevs_discovered": 1, 00:11:48.107 "num_base_bdevs_operational": 1, 00:11:48.107 "base_bdevs_list": [ 00:11:48.107 { 00:11:48.107 "name": null, 00:11:48.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.107 "is_configured": false, 00:11:48.107 "data_offset": 0, 00:11:48.107 "data_size": 63488 00:11:48.107 }, 00:11:48.107 { 00:11:48.107 "name": "BaseBdev2", 00:11:48.107 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:48.107 "is_configured": true, 00:11:48.107 "data_offset": 2048, 00:11:48.107 "data_size": 63488 00:11:48.107 } 00:11:48.107 ] 00:11:48.107 }' 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.107 12:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.107 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:48.107 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.107 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.107 [2024-11-19 12:31:53.121622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:11:48.107 [2024-11-19 12:31:53.126031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:11:48.107 12:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.107 12:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:48.107 [2024-11-19 12:31:53.128005] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.053 "name": "raid_bdev1", 00:11:49.053 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:49.053 "strip_size_kb": 0, 00:11:49.053 "state": "online", 00:11:49.053 "raid_level": "raid1", 00:11:49.053 "superblock": true, 00:11:49.053 "num_base_bdevs": 2, 00:11:49.053 
"num_base_bdevs_discovered": 2, 00:11:49.053 "num_base_bdevs_operational": 2, 00:11:49.053 "process": { 00:11:49.053 "type": "rebuild", 00:11:49.053 "target": "spare", 00:11:49.053 "progress": { 00:11:49.053 "blocks": 20480, 00:11:49.053 "percent": 32 00:11:49.053 } 00:11:49.053 }, 00:11:49.053 "base_bdevs_list": [ 00:11:49.053 { 00:11:49.053 "name": "spare", 00:11:49.053 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538", 00:11:49.053 "is_configured": true, 00:11:49.053 "data_offset": 2048, 00:11:49.053 "data_size": 63488 00:11:49.053 }, 00:11:49.053 { 00:11:49.053 "name": "BaseBdev2", 00:11:49.053 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:49.053 "is_configured": true, 00:11:49.053 "data_offset": 2048, 00:11:49.053 "data_size": 63488 00:11:49.053 } 00:11:49.053 ] 00:11:49.053 }' 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.053 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.053 [2024-11-19 12:31:54.289029] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:49.313 [2024-11-19 12:31:54.333478] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:49.313 [2024-11-19 12:31:54.333579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.313 [2024-11-19 12:31:54.333599] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:49.313 [2024-11-19 12:31:54.333607] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:49.313 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.313 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:49.313 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.313 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.314 12:31:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.314 "name": "raid_bdev1", 00:11:49.314 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:49.314 "strip_size_kb": 0, 00:11:49.314 "state": "online", 00:11:49.314 "raid_level": "raid1", 00:11:49.314 "superblock": true, 00:11:49.314 "num_base_bdevs": 2, 00:11:49.314 "num_base_bdevs_discovered": 1, 00:11:49.314 "num_base_bdevs_operational": 1, 00:11:49.314 "base_bdevs_list": [ 00:11:49.314 { 00:11:49.314 "name": null, 00:11:49.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.314 "is_configured": false, 00:11:49.314 "data_offset": 0, 00:11:49.314 "data_size": 63488 00:11:49.314 }, 00:11:49.314 { 00:11:49.314 "name": "BaseBdev2", 00:11:49.314 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:49.314 "is_configured": true, 00:11:49.314 "data_offset": 2048, 00:11:49.314 "data_size": 63488 00:11:49.314 } 00:11:49.314 ] 00:11:49.314 }' 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.314 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.573 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.574 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.574 "name": "raid_bdev1", 00:11:49.574 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:49.574 "strip_size_kb": 0, 00:11:49.574 "state": "online", 00:11:49.574 "raid_level": "raid1", 00:11:49.574 "superblock": true, 00:11:49.574 "num_base_bdevs": 2, 00:11:49.574 "num_base_bdevs_discovered": 1, 00:11:49.574 "num_base_bdevs_operational": 1, 00:11:49.574 "base_bdevs_list": [ 00:11:49.574 { 00:11:49.574 "name": null, 00:11:49.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.574 "is_configured": false, 00:11:49.574 "data_offset": 0, 00:11:49.574 "data_size": 63488 00:11:49.574 }, 00:11:49.574 { 00:11:49.574 "name": "BaseBdev2", 00:11:49.574 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:49.574 "is_configured": true, 00:11:49.574 "data_offset": 2048, 00:11:49.574 "data_size": 63488 00:11:49.574 } 00:11:49.574 ] 00:11:49.574 }' 00:11:49.574 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.833 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:49.833 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.833 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:49.833 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:49.833 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.833 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:49.833 [2024-11-19 12:31:54.881189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:49.833 [2024-11-19 12:31:54.885511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:11:49.833 12:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.833 12:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:49.833 [2024-11-19 12:31:54.887572] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.768 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.769 "name": "raid_bdev1", 00:11:50.769 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:50.769 "strip_size_kb": 0, 00:11:50.769 "state": "online", 00:11:50.769 "raid_level": "raid1", 
00:11:50.769 "superblock": true, 00:11:50.769 "num_base_bdevs": 2, 00:11:50.769 "num_base_bdevs_discovered": 2, 00:11:50.769 "num_base_bdevs_operational": 2, 00:11:50.769 "process": { 00:11:50.769 "type": "rebuild", 00:11:50.769 "target": "spare", 00:11:50.769 "progress": { 00:11:50.769 "blocks": 20480, 00:11:50.769 "percent": 32 00:11:50.769 } 00:11:50.769 }, 00:11:50.769 "base_bdevs_list": [ 00:11:50.769 { 00:11:50.769 "name": "spare", 00:11:50.769 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538", 00:11:50.769 "is_configured": true, 00:11:50.769 "data_offset": 2048, 00:11:50.769 "data_size": 63488 00:11:50.769 }, 00:11:50.769 { 00:11:50.769 "name": "BaseBdev2", 00:11:50.769 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:50.769 "is_configured": true, 00:11:50.769 "data_offset": 2048, 00:11:50.769 "data_size": 63488 00:11:50.769 } 00:11:50.769 ] 00:11:50.769 }' 00:11:50.769 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.769 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:50.769 12:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:51.027 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:51.027 12:31:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=309 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.027 "name": "raid_bdev1", 00:11:51.027 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:51.027 "strip_size_kb": 0, 00:11:51.027 "state": "online", 00:11:51.027 "raid_level": "raid1", 00:11:51.027 "superblock": true, 00:11:51.027 "num_base_bdevs": 2, 00:11:51.027 "num_base_bdevs_discovered": 2, 00:11:51.027 "num_base_bdevs_operational": 2, 00:11:51.027 "process": { 00:11:51.027 "type": "rebuild", 00:11:51.027 "target": "spare", 00:11:51.027 "progress": { 00:11:51.027 "blocks": 22528, 00:11:51.027 "percent": 35 00:11:51.027 } 00:11:51.027 }, 00:11:51.027 "base_bdevs_list": [ 
00:11:51.027 { 00:11:51.027 "name": "spare", 00:11:51.027 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538", 00:11:51.027 "is_configured": true, 00:11:51.027 "data_offset": 2048, 00:11:51.027 "data_size": 63488 00:11:51.027 }, 00:11:51.027 { 00:11:51.027 "name": "BaseBdev2", 00:11:51.027 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:51.027 "is_configured": true, 00:11:51.027 "data_offset": 2048, 00:11:51.027 "data_size": 63488 00:11:51.027 } 00:11:51.027 ] 00:11:51.027 }' 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.027 12:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.964 12:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.224 12:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.224 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.224 "name": "raid_bdev1", 00:11:52.224 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:52.224 "strip_size_kb": 0, 00:11:52.224 "state": "online", 00:11:52.224 "raid_level": "raid1", 00:11:52.224 "superblock": true, 00:11:52.224 "num_base_bdevs": 2, 00:11:52.224 "num_base_bdevs_discovered": 2, 00:11:52.224 "num_base_bdevs_operational": 2, 00:11:52.224 "process": { 00:11:52.224 "type": "rebuild", 00:11:52.224 "target": "spare", 00:11:52.224 "progress": { 00:11:52.224 "blocks": 47104, 00:11:52.224 "percent": 74 00:11:52.224 } 00:11:52.224 }, 00:11:52.224 "base_bdevs_list": [ 00:11:52.224 { 00:11:52.224 "name": "spare", 00:11:52.224 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538", 00:11:52.224 "is_configured": true, 00:11:52.224 "data_offset": 2048, 00:11:52.224 "data_size": 63488 00:11:52.224 }, 00:11:52.224 { 00:11:52.224 "name": "BaseBdev2", 00:11:52.224 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:52.224 "is_configured": true, 00:11:52.224 "data_offset": 2048, 00:11:52.224 "data_size": 63488 00:11:52.224 } 00:11:52.224 ] 00:11:52.224 }' 00:11:52.224 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.224 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.224 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.224 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.224 12:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:52.792 [2024-11-19 
12:31:58.000243] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:52.792 [2024-11-19 12:31:58.000421] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:52.792 [2024-11-19 12:31:58.000567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.364 "name": "raid_bdev1", 00:11:53.364 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:53.364 "strip_size_kb": 0, 00:11:53.364 "state": "online", 00:11:53.364 "raid_level": "raid1", 00:11:53.364 "superblock": true, 00:11:53.364 "num_base_bdevs": 2, 00:11:53.364 "num_base_bdevs_discovered": 2, 00:11:53.364 
"num_base_bdevs_operational": 2, 00:11:53.364 "base_bdevs_list": [ 00:11:53.364 { 00:11:53.364 "name": "spare", 00:11:53.364 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538", 00:11:53.364 "is_configured": true, 00:11:53.364 "data_offset": 2048, 00:11:53.364 "data_size": 63488 00:11:53.364 }, 00:11:53.364 { 00:11:53.364 "name": "BaseBdev2", 00:11:53.364 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:53.364 "is_configured": true, 00:11:53.364 "data_offset": 2048, 00:11:53.364 "data_size": 63488 00:11:53.364 } 00:11:53.364 ] 00:11:53.364 }' 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.364 "name": "raid_bdev1", 00:11:53.364 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:53.364 "strip_size_kb": 0, 00:11:53.364 "state": "online", 00:11:53.364 "raid_level": "raid1", 00:11:53.364 "superblock": true, 00:11:53.364 "num_base_bdevs": 2, 00:11:53.364 "num_base_bdevs_discovered": 2, 00:11:53.364 "num_base_bdevs_operational": 2, 00:11:53.364 "base_bdevs_list": [ 00:11:53.364 { 00:11:53.364 "name": "spare", 00:11:53.364 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538", 00:11:53.364 "is_configured": true, 00:11:53.364 "data_offset": 2048, 00:11:53.364 "data_size": 63488 00:11:53.364 }, 00:11:53.364 { 00:11:53.364 "name": "BaseBdev2", 00:11:53.364 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:53.364 "is_configured": true, 00:11:53.364 "data_offset": 2048, 00:11:53.364 "data_size": 63488 00:11:53.364 } 00:11:53.364 ] 00:11:53.364 }' 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.364 12:31:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.364 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.630 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.630 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.630 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.630 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.630 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.630 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.630 "name": "raid_bdev1", 00:11:53.630 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:11:53.630 "strip_size_kb": 0, 00:11:53.630 "state": "online", 00:11:53.630 "raid_level": "raid1", 00:11:53.630 "superblock": true, 00:11:53.630 "num_base_bdevs": 2, 00:11:53.630 "num_base_bdevs_discovered": 2, 00:11:53.630 "num_base_bdevs_operational": 2, 00:11:53.630 "base_bdevs_list": [ 00:11:53.630 { 00:11:53.630 "name": "spare", 00:11:53.630 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538", 00:11:53.630 "is_configured": true, 00:11:53.630 "data_offset": 2048, 00:11:53.630 "data_size": 63488 00:11:53.630 }, 00:11:53.630 { 
00:11:53.630 "name": "BaseBdev2", 00:11:53.630 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:11:53.630 "is_configured": true, 00:11:53.630 "data_offset": 2048, 00:11:53.630 "data_size": 63488 00:11:53.630 } 00:11:53.630 ] 00:11:53.630 }' 00:11:53.630 12:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.630 12:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.898 [2024-11-19 12:31:59.051217] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.898 [2024-11-19 12:31:59.051266] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.898 [2024-11-19 12:31:59.051361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.898 [2024-11-19 12:31:59.051448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.898 [2024-11-19 12:31:59.051468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:53.898 12:31:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:53.898 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:54.158 /dev/nbd0 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.158 1+0 records in 00:11:54.158 1+0 records out 00:11:54.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536067 s, 7.6 MB/s 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:54.158 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:54.418 /dev/nbd1 00:11:54.418 12:31:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:54.418 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.418 1+0 records in 00:11:54.418 1+0 records out 00:11:54.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290525 s, 14.1 MB/s 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:54.678 12:31:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.678 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.937 12:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.198 [2024-11-19 12:32:00.242003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:11:55.198 [2024-11-19 12:32:00.242071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.198 [2024-11-19 12:32:00.242091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:55.198 [2024-11-19 12:32:00.242104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.198 [2024-11-19 12:32:00.244272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.198 [2024-11-19 12:32:00.244393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:55.198 [2024-11-19 12:32:00.244482] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:55.198 [2024-11-19 12:32:00.244529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:55.198 [2024-11-19 12:32:00.244652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.198 spare 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.198 [2024-11-19 12:32:00.344574] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:55.198 [2024-11-19 12:32:00.344613] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:55.198 [2024-11-19 12:32:00.344969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:55.198 [2024-11-19 12:32:00.345142] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:55.198 [2024-11-19 12:32:00.345155] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:55.198 [2024-11-19 12:32:00.345312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.198 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.198 
12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:55.198 "name": "raid_bdev1",
00:11:55.199 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:11:55.199 "strip_size_kb": 0,
00:11:55.199 "state": "online",
00:11:55.199 "raid_level": "raid1",
00:11:55.199 "superblock": true,
00:11:55.199 "num_base_bdevs": 2,
00:11:55.199 "num_base_bdevs_discovered": 2,
00:11:55.199 "num_base_bdevs_operational": 2,
00:11:55.199 "base_bdevs_list": [
00:11:55.199 {
00:11:55.199 "name": "spare",
00:11:55.199 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538",
00:11:55.199 "is_configured": true,
00:11:55.199 "data_offset": 2048,
00:11:55.199 "data_size": 63488
00:11:55.199 },
00:11:55.199 {
00:11:55.199 "name": "BaseBdev2",
00:11:55.199 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:11:55.199 "is_configured": true,
00:11:55.199 "data_offset": 2048,
00:11:55.199 "data_size": 63488
00:11:55.199 }
00:11:55.199 ]
00:11:55.199 }'
00:11:55.199 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:55.199 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:55.770 "name": "raid_bdev1",
00:11:55.770 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:11:55.770 "strip_size_kb": 0,
00:11:55.770 "state": "online",
00:11:55.770 "raid_level": "raid1",
00:11:55.770 "superblock": true,
00:11:55.770 "num_base_bdevs": 2,
00:11:55.770 "num_base_bdevs_discovered": 2,
00:11:55.770 "num_base_bdevs_operational": 2,
00:11:55.770 "base_bdevs_list": [
00:11:55.770 {
00:11:55.770 "name": "spare",
00:11:55.770 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538",
00:11:55.770 "is_configured": true,
00:11:55.770 "data_offset": 2048,
00:11:55.770 "data_size": 63488
00:11:55.770 },
00:11:55.770 {
00:11:55.770 "name": "BaseBdev2",
00:11:55.770 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:11:55.770 "is_configured": true,
00:11:55.770 "data_offset": 2048,
00:11:55.770 "data_size": 63488
00:11:55.770 }
00:11:55.770 ]
00:11:55.770 }'
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:55.770 [2024-11-19 12:32:00.968810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:55.770 12:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.770 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:55.770 "name": "raid_bdev1",
00:11:55.770 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:11:55.770 "strip_size_kb": 0,
00:11:55.770 "state": "online",
00:11:55.770 "raid_level": "raid1",
00:11:55.770 "superblock": true,
00:11:55.770 "num_base_bdevs": 2,
00:11:55.770 "num_base_bdevs_discovered": 1,
00:11:55.770 "num_base_bdevs_operational": 1,
00:11:55.770 "base_bdevs_list": [
00:11:55.770 {
00:11:55.770 "name": null,
00:11:55.770 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:55.770 "is_configured": false,
00:11:55.770 "data_offset": 0,
00:11:55.770 "data_size": 63488
00:11:55.770 },
00:11:55.770 {
00:11:55.770 "name": "BaseBdev2",
00:11:55.770 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:11:55.770 "is_configured": true,
00:11:55.770 "data_offset": 2048,
00:11:55.770 "data_size": 63488
00:11:55.770 }
00:11:55.770 ]
00:11:55.770 }'
00:11:55.770 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:55.770 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:56.339 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:56.339 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.339 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:56.339 [2024-11-19 12:32:01.420059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:56.339 [2024-11-19 12:32:01.420354] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:11:56.339 [2024-11-19 12:32:01.420413] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:11:56.339 [2024-11-19 12:32:01.420483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:56.339 [2024-11-19 12:32:01.424528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10
00:11:56.339 12:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.339 12:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
[2024-11-19 12:32:01.426452] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:57.283 "name": "raid_bdev1",
00:11:57.283 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:11:57.283 "strip_size_kb": 0,
00:11:57.283 "state": "online",
00:11:57.283 "raid_level": "raid1",
00:11:57.283 "superblock": true,
00:11:57.283 "num_base_bdevs": 2,
00:11:57.283 "num_base_bdevs_discovered": 2,
00:11:57.283 "num_base_bdevs_operational": 2,
00:11:57.283 "process": {
00:11:57.283 "type": "rebuild",
00:11:57.283 "target": "spare",
00:11:57.283 "progress": {
00:11:57.283 "blocks": 20480,
00:11:57.283 "percent": 32
00:11:57.283 }
00:11:57.283 },
00:11:57.283 "base_bdevs_list": [
00:11:57.283 {
00:11:57.283 "name": "spare",
00:11:57.283 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538",
00:11:57.283 "is_configured": true,
00:11:57.283 "data_offset": 2048,
00:11:57.283 "data_size": 63488
00:11:57.283 },
00:11:57.283 {
00:11:57.283 "name": "BaseBdev2",
00:11:57.283 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:11:57.283 "is_configured": true,
00:11:57.283 "data_offset": 2048,
00:11:57.283 "data_size": 63488
00:11:57.283 }
00:11:57.283 ]
00:11:57.283 }'
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:57.283 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:57.542 [2024-11-19 12:32:02.587507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:57.542 [2024-11-19 12:32:02.631397] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:57.542 [2024-11-19 12:32:02.631482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:57.542 [2024-11-19 12:32:02.631502] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:57.542 [2024-11-19 12:32:02.631509] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:57.542 "name": "raid_bdev1",
00:11:57.542 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:11:57.542 "strip_size_kb": 0,
00:11:57.542 "state": "online",
00:11:57.542 "raid_level": "raid1",
00:11:57.542 "superblock": true,
00:11:57.542 "num_base_bdevs": 2,
00:11:57.542 "num_base_bdevs_discovered": 1,
00:11:57.542 "num_base_bdevs_operational": 1,
00:11:57.542 "base_bdevs_list": [
00:11:57.542 {
00:11:57.542 "name": null,
00:11:57.542 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:57.542 "is_configured": false,
00:11:57.542 "data_offset": 0,
00:11:57.542 "data_size": 63488
00:11:57.542 },
00:11:57.542 {
00:11:57.542 "name": "BaseBdev2",
00:11:57.542 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:11:57.542 "is_configured": true,
00:11:57.542 "data_offset": 2048,
00:11:57.542 "data_size": 63488
00:11:57.542 }
00:11:57.542 ]
00:11:57.542 }'
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:57.542 12:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:58.111 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:11:58.111 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:58.111 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:58.111 [2024-11-19 12:32:03.103212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:11:58.111 [2024-11-19 12:32:03.103318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:58.111 [2024-11-19 12:32:03.103346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:11:58.111 [2024-11-19 12:32:03.103355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:58.111 [2024-11-19 12:32:03.103859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:58.111 [2024-11-19 12:32:03.103881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:11:58.111 [2024-11-19 12:32:03.103973] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:11:58.111 [2024-11-19 12:32:03.103985] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:11:58.111 [2024-11-19 12:32:03.104002] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:11:58.111 [2024-11-19 12:32:03.104040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:58.111 spare
00:11:58.111 [2024-11-19 12:32:03.108075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0
00:11:58.111 12:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:58.111 12:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
[2024-11-19 12:32:03.109942] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:59.048 "name": "raid_bdev1",
00:11:59.048 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:11:59.048 "strip_size_kb": 0,
00:11:59.048 "state": "online",
00:11:59.048 "raid_level": "raid1",
00:11:59.048 "superblock": true,
00:11:59.048 "num_base_bdevs": 2,
00:11:59.048 "num_base_bdevs_discovered": 2,
00:11:59.048 "num_base_bdevs_operational": 2,
00:11:59.048 "process": {
00:11:59.048 "type": "rebuild",
00:11:59.048 "target": "spare",
00:11:59.048 "progress": {
00:11:59.048 "blocks": 20480,
00:11:59.048 "percent": 32
00:11:59.048 }
00:11:59.048 },
00:11:59.048 "base_bdevs_list": [
00:11:59.048 {
00:11:59.048 "name": "spare",
00:11:59.048 "uuid": "03f9ca20-9cc7-5c89-a7e2-54dd06009538",
00:11:59.048 "is_configured": true,
00:11:59.048 "data_offset": 2048,
00:11:59.048 "data_size": 63488
00:11:59.048 },
00:11:59.048 {
00:11:59.048 "name": "BaseBdev2",
00:11:59.048 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:11:59.048 "is_configured": true,
00:11:59.048 "data_offset": 2048,
00:11:59.048 "data_size": 63488
00:11:59.048 }
00:11:59.048 ]
00:11:59.048 }'
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.048 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:59.048 [2024-11-19 12:32:04.274306] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:59.312 [2024-11-19 12:32:04.314666] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:59.312 [2024-11-19 12:32:04.314804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:59.312 [2024-11-19 12:32:04.314822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:59.312 [2024-11-19 12:32:04.314831] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:59.312 "name": "raid_bdev1",
00:11:59.312 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:11:59.312 "strip_size_kb": 0,
00:11:59.312 "state": "online",
00:11:59.312 "raid_level": "raid1",
00:11:59.312 "superblock": true,
00:11:59.312 "num_base_bdevs": 2,
00:11:59.312 "num_base_bdevs_discovered": 1,
00:11:59.312 "num_base_bdevs_operational": 1,
00:11:59.312 "base_bdevs_list": [
00:11:59.312 {
00:11:59.312 "name": null,
00:11:59.312 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:59.312 "is_configured": false,
00:11:59.312 "data_offset": 0,
00:11:59.312 "data_size": 63488
00:11:59.312 },
00:11:59.312 {
00:11:59.312 "name": "BaseBdev2",
00:11:59.312 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:11:59.312 "is_configured": true,
00:11:59.312 "data_offset": 2048,
00:11:59.312 "data_size": 63488
00:11:59.312 }
00:11:59.312 ]
00:11:59.312 }'
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:59.312 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:59.574 "name": "raid_bdev1",
00:11:59.574 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:11:59.574 "strip_size_kb": 0,
00:11:59.574 "state": "online",
00:11:59.574 "raid_level": "raid1",
00:11:59.574 "superblock": true,
00:11:59.574 "num_base_bdevs": 2,
00:11:59.574 "num_base_bdevs_discovered": 1,
00:11:59.574 "num_base_bdevs_operational": 1,
00:11:59.574 "base_bdevs_list": [
00:11:59.574 {
00:11:59.574 "name": null,
00:11:59.574 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:59.574 "is_configured": false,
00:11:59.574 "data_offset": 0,
00:11:59.574 "data_size": 63488
00:11:59.574 },
00:11:59.574 {
00:11:59.574 "name": "BaseBdev2",
00:11:59.574 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:11:59.574 "is_configured": true,
00:11:59.574 "data_offset": 2048,
00:11:59.574 "data_size": 63488
00:11:59.574 }
00:11:59.574 ]
00:11:59.574 }'
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:59.574 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:59.835 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:59.835 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:11:59.835 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.835 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:59.835 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.835 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:11:59.835 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:59.835 12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:59.835 [2024-11-19 12:32:04.870318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:11:59.835 [2024-11-19 12:32:04.870414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:59.835 [2024-11-19 12:32:04.870437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:11:59.835 [2024-11-19 12:32:04.870448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:59.835 [2024-11-19 12:32:04.870891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:59.835 [2024-11-19 12:32:04.870922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:59.835 [2024-11-19 12:32:04.871002] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:11:59.835 [2024-11-19 12:32:04.871022] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:11:59.835 [2024-11-19 12:32:04.871030] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:11:59.835 [2024-11-19 12:32:04.871043] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:11:59.835 BaseBdev1
12:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:59.835 12:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:12:00.773 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:00.773 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:00.773 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:00.773 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:00.773 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:00.774 "name": "raid_bdev1",
00:12:00.774 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:12:00.774 "strip_size_kb": 0,
00:12:00.774 "state": "online",
00:12:00.774 "raid_level": "raid1",
00:12:00.774 "superblock": true,
00:12:00.774 "num_base_bdevs": 2,
00:12:00.774 "num_base_bdevs_discovered": 1,
00:12:00.774 "num_base_bdevs_operational": 1,
00:12:00.774 "base_bdevs_list": [
00:12:00.774 {
00:12:00.774 "name": null,
00:12:00.774 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:00.774 "is_configured": false,
00:12:00.774 "data_offset": 0,
00:12:00.774 "data_size": 63488
00:12:00.774 },
00:12:00.774 {
00:12:00.774 "name": "BaseBdev2",
00:12:00.774 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:12:00.774 "is_configured": true,
00:12:00.774 "data_offset": 2048,
00:12:00.774 "data_size": 63488
00:12:00.774 }
00:12:00.774 ]
00:12:00.774 }'
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:00.774 12:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:01.345 "name": "raid_bdev1",
00:12:01.345 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:12:01.345 "strip_size_kb": 0,
00:12:01.345 "state": "online",
00:12:01.345 "raid_level": "raid1",
00:12:01.345 "superblock": true,
00:12:01.345 "num_base_bdevs": 2,
00:12:01.345 "num_base_bdevs_discovered": 1,
00:12:01.345 "num_base_bdevs_operational": 1,
00:12:01.345 "base_bdevs_list": [
00:12:01.345 {
00:12:01.345 "name": null,
00:12:01.345 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.345 "is_configured": false,
00:12:01.345 "data_offset": 0,
00:12:01.345 "data_size": 63488
00:12:01.345 },
00:12:01.345 {
00:12:01.345 "name": "BaseBdev2",
00:12:01.345 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:12:01.345 "is_configured": true,
00:12:01.345 "data_offset": 2048,
00:12:01.345 "data_size": 63488
00:12:01.345 }
00:12:01.345 ]
00:12:01.345 }'
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.345 [2024-11-19 12:32:06.443636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:01.345 [2024-11-19 12:32:06.443830] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:12:01.345 [2024-11-19 12:32:06.443843] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:12:01.345 request:
00:12:01.345 {
00:12:01.345 "base_bdev": "BaseBdev1",
00:12:01.345 "raid_bdev": "raid_bdev1",
00:12:01.345 "method": "bdev_raid_add_base_bdev",
00:12:01.345 "req_id": 1
00:12:01.345 }
00:12:01.345 Got JSON-RPC error response
00:12:01.345 response:
00:12:01.345 {
00:12:01.345 "code": -22,
00:12:01.345 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:12:01.345 }
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:01.345 12:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:02.285 "name": "raid_bdev1",
00:12:02.285 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385",
00:12:02.285 "strip_size_kb": 0,
00:12:02.285 "state": "online",
00:12:02.285 "raid_level": "raid1",
00:12:02.285 "superblock": true,
00:12:02.285 "num_base_bdevs": 2,
00:12:02.285 "num_base_bdevs_discovered": 1,
00:12:02.285 "num_base_bdevs_operational": 1,
00:12:02.285 "base_bdevs_list": [
00:12:02.285 {
00:12:02.285 "name": null,
00:12:02.285 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:02.285 "is_configured": false,
00:12:02.285 "data_offset": 0,
00:12:02.285 "data_size": 63488
00:12:02.285 },
00:12:02.285 {
00:12:02.285 "name": "BaseBdev2",
00:12:02.285 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6",
00:12:02.285 "is_configured": true,
00:12:02.285 "data_offset": 2048,
00:12:02.285 "data_size": 63488
00:12:02.285 }
00:12:02.285 ]
00:12:02.285 }'
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:02.285 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- #
set +x 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.855 "name": "raid_bdev1", 00:12:02.855 "uuid": "58f80d82-0178-4090-9209-c89ca6e93385", 00:12:02.855 "strip_size_kb": 0, 00:12:02.855 "state": "online", 00:12:02.855 "raid_level": "raid1", 00:12:02.855 "superblock": true, 00:12:02.855 "num_base_bdevs": 2, 00:12:02.855 "num_base_bdevs_discovered": 1, 00:12:02.855 "num_base_bdevs_operational": 1, 00:12:02.855 "base_bdevs_list": [ 00:12:02.855 { 00:12:02.855 "name": null, 00:12:02.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.855 "is_configured": false, 00:12:02.855 "data_offset": 0, 00:12:02.855 "data_size": 63488 00:12:02.855 }, 00:12:02.855 { 00:12:02.855 "name": "BaseBdev2", 00:12:02.855 "uuid": "bcf8fd58-f370-50e9-9c54-a6e18b20aec6", 00:12:02.855 "is_configured": true, 00:12:02.855 "data_offset": 2048, 00:12:02.855 "data_size": 63488 00:12:02.855 } 00:12:02.855 ] 00:12:02.855 }' 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86607 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86607 ']' 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86607 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.855 12:32:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86607 00:12:02.855 12:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.855 killing process with pid 86607 00:12:02.855 Received shutdown signal, test time was about 60.000000 seconds 00:12:02.855 00:12:02.855 Latency(us) 00:12:02.855 [2024-11-19T12:32:08.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.855 [2024-11-19T12:32:08.116Z] =================================================================================================================== 00:12:02.855 [2024-11-19T12:32:08.116Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:02.855 12:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.855 12:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86607' 00:12:02.855 12:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86607 00:12:02.855 [2024-11-19 12:32:08.027134] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.855 12:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86607 00:12:02.855 [2024-11-19 12:32:08.027318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.855 [2024-11-19 12:32:08.027376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.855 [2024-11-19 12:32:08.027386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:02.855 [2024-11-19 12:32:08.059456] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.115 12:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:03.115 00:12:03.115 real 0m22.202s 00:12:03.115 user 0m26.799s 00:12:03.115 sys 0m4.246s 00:12:03.115 12:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.115 ************************************ 00:12:03.115 END TEST raid_rebuild_test_sb 00:12:03.115 ************************************ 00:12:03.115 12:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.115 12:32:08 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:03.115 12:32:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:03.115 12:32:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.115 12:32:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.374 ************************************ 00:12:03.374 START TEST raid_rebuild_test_io 00:12:03.374 ************************************ 00:12:03.374 12:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:03.374 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:03.374 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:03.374 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:03.374 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:03.374 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:03.374 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:03.374 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87324 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87324 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 
87324 ']' 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.375 12:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.375 [2024-11-19 12:32:08.476880] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:03.375 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:03.375 Zero copy mechanism will not be used. 00:12:03.375 [2024-11-19 12:32:08.477089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87324 ] 00:12:03.375 [2024-11-19 12:32:08.619740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.635 [2024-11-19 12:32:08.673177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.635 [2024-11-19 12:32:08.716581] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.635 [2024-11-19 12:32:08.716718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.206 BaseBdev1_malloc 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.206 [2024-11-19 12:32:09.331700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:04.206 [2024-11-19 12:32:09.331790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.206 [2024-11-19 12:32:09.331818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:04.206 [2024-11-19 12:32:09.331835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.206 [2024-11-19 12:32:09.333868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.206 [2024-11-19 12:32:09.333906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:04.206 BaseBdev1 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.206 BaseBdev2_malloc 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.206 [2024-11-19 12:32:09.370604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:04.206 [2024-11-19 12:32:09.370765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.206 [2024-11-19 12:32:09.370793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:04.206 [2024-11-19 12:32:09.370803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.206 [2024-11-19 12:32:09.373101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.206 [2024-11-19 12:32:09.373140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:04.206 BaseBdev2 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.206 spare_malloc 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.206 spare_delay 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.206 [2024-11-19 12:32:09.411369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:04.206 [2024-11-19 12:32:09.411459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.206 [2024-11-19 12:32:09.411487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:04.206 [2024-11-19 12:32:09.411496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.206 [2024-11-19 12:32:09.413716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.206 [2024-11-19 12:32:09.413770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:04.206 spare 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:04.206 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.206 
12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.206 [2024-11-19 12:32:09.423397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.206 [2024-11-19 12:32:09.425366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.206 [2024-11-19 12:32:09.425581] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:04.206 [2024-11-19 12:32:09.425599] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:04.206 [2024-11-19 12:32:09.425924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:04.206 [2024-11-19 12:32:09.426058] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:04.206 [2024-11-19 12:32:09.426069] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:04.207 [2024-11-19 12:32:09.426248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.207 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.467 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.467 "name": "raid_bdev1", 00:12:04.467 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:04.467 "strip_size_kb": 0, 00:12:04.467 "state": "online", 00:12:04.467 "raid_level": "raid1", 00:12:04.467 "superblock": false, 00:12:04.467 "num_base_bdevs": 2, 00:12:04.467 "num_base_bdevs_discovered": 2, 00:12:04.467 "num_base_bdevs_operational": 2, 00:12:04.467 "base_bdevs_list": [ 00:12:04.467 { 00:12:04.467 "name": "BaseBdev1", 00:12:04.467 "uuid": "9a7065c8-0f3f-5e79-831e-11f8ecb625f1", 00:12:04.467 "is_configured": true, 00:12:04.467 "data_offset": 0, 00:12:04.467 "data_size": 65536 00:12:04.467 }, 00:12:04.467 { 00:12:04.467 "name": "BaseBdev2", 00:12:04.467 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:04.467 "is_configured": true, 00:12:04.467 "data_offset": 0, 00:12:04.467 "data_size": 65536 00:12:04.467 } 00:12:04.467 ] 00:12:04.467 }' 00:12:04.467 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.467 12:32:09 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.728 [2024-11-19 12:32:09.895054] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:04.728 [2024-11-19 12:32:09.966539] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.728 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.989 12:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.989 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:04.989 "name": "raid_bdev1", 00:12:04.989 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:04.989 "strip_size_kb": 0, 00:12:04.989 "state": "online", 00:12:04.989 "raid_level": "raid1", 00:12:04.989 "superblock": false, 00:12:04.989 "num_base_bdevs": 2, 00:12:04.989 "num_base_bdevs_discovered": 1, 00:12:04.989 "num_base_bdevs_operational": 1, 00:12:04.989 "base_bdevs_list": [ 00:12:04.989 { 00:12:04.989 "name": null, 00:12:04.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.989 "is_configured": false, 00:12:04.989 "data_offset": 0, 00:12:04.989 "data_size": 65536 00:12:04.989 }, 00:12:04.989 { 00:12:04.989 "name": "BaseBdev2", 00:12:04.989 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:04.989 "is_configured": true, 00:12:04.989 "data_offset": 0, 00:12:04.989 "data_size": 65536 00:12:04.989 } 00:12:04.989 ] 00:12:04.989 }' 00:12:04.989 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.989 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.989 [2024-11-19 12:32:10.060481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:04.989 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:04.989 Zero copy mechanism will not be used. 00:12:04.989 Running I/O for 60 seconds... 
00:12:05.249 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:05.249 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.249 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.249 [2024-11-19 12:32:10.403167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:05.249 12:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.249 12:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:05.249 [2024-11-19 12:32:10.434775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:05.249 [2024-11-19 12:32:10.436826] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:05.509 [2024-11-19 12:32:10.549958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:05.509 [2024-11-19 12:32:10.550512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:05.509 [2024-11-19 12:32:10.670177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:05.509 [2024-11-19 12:32:10.670512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:05.770 [2024-11-19 12:32:11.019905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:06.029 201.00 IOPS, 603.00 MiB/s [2024-11-19T12:32:11.290Z] [2024-11-19 12:32:11.241549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.289 "name": "raid_bdev1", 00:12:06.289 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:06.289 "strip_size_kb": 0, 00:12:06.289 "state": "online", 00:12:06.289 "raid_level": "raid1", 00:12:06.289 "superblock": false, 00:12:06.289 "num_base_bdevs": 2, 00:12:06.289 "num_base_bdevs_discovered": 2, 00:12:06.289 "num_base_bdevs_operational": 2, 00:12:06.289 "process": { 00:12:06.289 "type": "rebuild", 00:12:06.289 "target": "spare", 00:12:06.289 "progress": { 00:12:06.289 "blocks": 12288, 00:12:06.289 "percent": 18 00:12:06.289 } 00:12:06.289 }, 00:12:06.289 "base_bdevs_list": [ 00:12:06.289 { 00:12:06.289 "name": "spare", 00:12:06.289 "uuid": "fc4fd9be-3ebe-516a-a315-ba69d6c827e6", 00:12:06.289 "is_configured": true, 00:12:06.289 "data_offset": 0, 00:12:06.289 "data_size": 65536 00:12:06.289 }, 00:12:06.289 { 
00:12:06.289 "name": "BaseBdev2", 00:12:06.289 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:06.289 "is_configured": true, 00:12:06.289 "data_offset": 0, 00:12:06.289 "data_size": 65536 00:12:06.289 } 00:12:06.289 ] 00:12:06.289 }' 00:12:06.289 [2024-11-19 12:32:11.480570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.289 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.550 [2024-11-19 12:32:11.560516] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:06.550 [2024-11-19 12:32:11.605186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:06.550 [2024-11-19 12:32:11.605516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:06.550 [2024-11-19 12:32:11.707322] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:06.550 [2024-11-19 12:32:11.721511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.550 [2024-11-19 12:32:11.721584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:06.550 [2024-11-19 12:32:11.721625] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:06.550 [2024-11-19 12:32:11.733255] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.550 12:32:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.551 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.551 "name": "raid_bdev1", 00:12:06.551 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:06.551 "strip_size_kb": 0, 00:12:06.551 "state": "online", 00:12:06.551 "raid_level": "raid1", 00:12:06.551 "superblock": false, 00:12:06.551 "num_base_bdevs": 2, 00:12:06.551 "num_base_bdevs_discovered": 1, 00:12:06.551 "num_base_bdevs_operational": 1, 00:12:06.551 "base_bdevs_list": [ 00:12:06.551 { 00:12:06.551 "name": null, 00:12:06.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.551 "is_configured": false, 00:12:06.551 "data_offset": 0, 00:12:06.551 "data_size": 65536 00:12:06.551 }, 00:12:06.551 { 00:12:06.551 "name": "BaseBdev2", 00:12:06.551 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:06.551 "is_configured": true, 00:12:06.551 "data_offset": 0, 00:12:06.551 "data_size": 65536 00:12:06.551 } 00:12:06.551 ] 00:12:06.551 }' 00:12:06.551 12:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.551 12:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.071 173.50 IOPS, 520.50 MiB/s [2024-11-19T12:32:12.332Z] 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.071 "name": "raid_bdev1", 00:12:07.071 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:07.071 "strip_size_kb": 0, 00:12:07.071 "state": "online", 00:12:07.071 "raid_level": "raid1", 00:12:07.071 "superblock": false, 00:12:07.071 "num_base_bdevs": 2, 00:12:07.071 "num_base_bdevs_discovered": 1, 00:12:07.071 "num_base_bdevs_operational": 1, 00:12:07.071 "base_bdevs_list": [ 00:12:07.071 { 00:12:07.071 "name": null, 00:12:07.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.071 "is_configured": false, 00:12:07.071 "data_offset": 0, 00:12:07.071 "data_size": 65536 00:12:07.071 }, 00:12:07.071 { 00:12:07.071 "name": "BaseBdev2", 00:12:07.071 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:07.071 "is_configured": true, 00:12:07.071 "data_offset": 0, 00:12:07.071 "data_size": 65536 00:12:07.071 } 00:12:07.071 ] 00:12:07.071 }' 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.071 [2024-11-19 12:32:12.293732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.071 12:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:07.331 [2024-11-19 12:32:12.354359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:07.331 [2024-11-19 12:32:12.356438] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:07.331 [2024-11-19 12:32:12.469846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:07.331 [2024-11-19 12:32:12.470421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:07.591 [2024-11-19 12:32:12.690126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:07.591 [2024-11-19 12:32:12.690438] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:07.851 [2024-11-19 12:32:13.020946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:07.851 [2024-11-19 12:32:13.021496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:08.111 160.67 IOPS, 482.00 MiB/s [2024-11-19T12:32:13.372Z] [2024-11-19 12:32:13.148090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.111 12:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.371 "name": "raid_bdev1", 00:12:08.371 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:08.371 "strip_size_kb": 0, 00:12:08.371 "state": "online", 00:12:08.371 "raid_level": "raid1", 00:12:08.371 "superblock": false, 00:12:08.371 "num_base_bdevs": 2, 00:12:08.371 "num_base_bdevs_discovered": 2, 00:12:08.371 "num_base_bdevs_operational": 2, 00:12:08.371 "process": { 00:12:08.371 "type": "rebuild", 00:12:08.371 "target": "spare", 00:12:08.371 "progress": { 00:12:08.371 "blocks": 10240, 00:12:08.371 "percent": 15 00:12:08.371 } 00:12:08.371 }, 00:12:08.371 "base_bdevs_list": [ 00:12:08.371 { 00:12:08.371 "name": "spare", 00:12:08.371 "uuid": "fc4fd9be-3ebe-516a-a315-ba69d6c827e6", 00:12:08.371 "is_configured": true, 00:12:08.371 "data_offset": 0, 00:12:08.371 "data_size": 65536 00:12:08.371 }, 00:12:08.371 { 00:12:08.371 "name": "BaseBdev2", 
00:12:08.371 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:08.371 "is_configured": true, 00:12:08.371 "data_offset": 0, 00:12:08.371 "data_size": 65536 00:12:08.371 } 00:12:08.371 ] 00:12:08.371 }' 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=326 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.371 [2024-11-19 12:32:13.460058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 
12288 offset_end: 18432 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.371 "name": "raid_bdev1", 00:12:08.371 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:08.371 "strip_size_kb": 0, 00:12:08.371 "state": "online", 00:12:08.371 "raid_level": "raid1", 00:12:08.371 "superblock": false, 00:12:08.371 "num_base_bdevs": 2, 00:12:08.371 "num_base_bdevs_discovered": 2, 00:12:08.371 "num_base_bdevs_operational": 2, 00:12:08.371 "process": { 00:12:08.371 "type": "rebuild", 00:12:08.371 "target": "spare", 00:12:08.371 "progress": { 00:12:08.371 "blocks": 14336, 00:12:08.371 "percent": 21 00:12:08.371 } 00:12:08.371 }, 00:12:08.371 "base_bdevs_list": [ 00:12:08.371 { 00:12:08.371 "name": "spare", 00:12:08.371 "uuid": "fc4fd9be-3ebe-516a-a315-ba69d6c827e6", 00:12:08.371 "is_configured": true, 00:12:08.371 "data_offset": 0, 00:12:08.371 "data_size": 65536 00:12:08.371 }, 00:12:08.371 { 00:12:08.371 "name": "BaseBdev2", 00:12:08.371 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:08.371 "is_configured": true, 00:12:08.371 "data_offset": 0, 00:12:08.371 "data_size": 65536 00:12:08.371 } 00:12:08.371 ] 00:12:08.371 }' 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.371 12:32:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.371 [2024-11-19 12:32:13.568952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.371 12:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.941 [2024-11-19 12:32:13.903505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:09.201 147.25 IOPS, 441.75 MiB/s [2024-11-19T12:32:14.462Z] [2024-11-19 12:32:14.318792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.460 12:32:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.460 [2024-11-19 12:32:14.658515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.460 "name": "raid_bdev1", 00:12:09.460 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:09.460 "strip_size_kb": 0, 00:12:09.460 "state": "online", 00:12:09.460 "raid_level": "raid1", 00:12:09.460 "superblock": false, 00:12:09.460 "num_base_bdevs": 2, 00:12:09.460 "num_base_bdevs_discovered": 2, 00:12:09.460 "num_base_bdevs_operational": 2, 00:12:09.460 "process": { 00:12:09.460 "type": "rebuild", 00:12:09.460 "target": "spare", 00:12:09.460 "progress": { 00:12:09.460 "blocks": 30720, 00:12:09.460 "percent": 46 00:12:09.460 } 00:12:09.460 }, 00:12:09.460 "base_bdevs_list": [ 00:12:09.460 { 00:12:09.460 "name": "spare", 00:12:09.460 "uuid": "fc4fd9be-3ebe-516a-a315-ba69d6c827e6", 00:12:09.460 "is_configured": true, 00:12:09.460 "data_offset": 0, 00:12:09.460 "data_size": 65536 00:12:09.460 }, 00:12:09.460 { 00:12:09.460 "name": "BaseBdev2", 00:12:09.460 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:09.460 "is_configured": true, 00:12:09.460 "data_offset": 0, 00:12:09.460 "data_size": 65536 00:12:09.460 } 00:12:09.460 ] 00:12:09.460 }' 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:09.460 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.720 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.720 12:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:09.720 [2024-11-19 
12:32:14.884615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:09.720 [2024-11-19 12:32:14.884888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:09.980 128.20 IOPS, 384.60 MiB/s [2024-11-19T12:32:15.241Z] [2024-11-19 12:32:15.216857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:10.264 [2024-11-19 12:32:15.323560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:10.522 [2024-11-19 12:32:15.654697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:10.522 [2024-11-19 12:32:15.655299] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.522 12:32:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.781 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.781 "name": "raid_bdev1", 00:12:10.781 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:10.781 "strip_size_kb": 0, 00:12:10.781 "state": "online", 00:12:10.781 "raid_level": "raid1", 00:12:10.781 "superblock": false, 00:12:10.781 "num_base_bdevs": 2, 00:12:10.781 "num_base_bdevs_discovered": 2, 00:12:10.781 "num_base_bdevs_operational": 2, 00:12:10.781 "process": { 00:12:10.781 "type": "rebuild", 00:12:10.781 "target": "spare", 00:12:10.781 "progress": { 00:12:10.781 "blocks": 45056, 00:12:10.781 "percent": 68 00:12:10.781 } 00:12:10.781 }, 00:12:10.781 "base_bdevs_list": [ 00:12:10.781 { 00:12:10.781 "name": "spare", 00:12:10.781 "uuid": "fc4fd9be-3ebe-516a-a315-ba69d6c827e6", 00:12:10.781 "is_configured": true, 00:12:10.781 "data_offset": 0, 00:12:10.781 "data_size": 65536 00:12:10.781 }, 00:12:10.781 { 00:12:10.781 "name": "BaseBdev2", 00:12:10.781 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:10.781 "is_configured": true, 00:12:10.781 "data_offset": 0, 00:12:10.781 "data_size": 65536 00:12:10.781 } 00:12:10.781 ] 00:12:10.781 }' 00:12:10.781 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.781 [2024-11-19 12:32:15.806156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:10.781 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.781 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.781 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # 
[[ spare == \s\p\a\r\e ]] 00:12:10.781 12:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:11.040 114.83 IOPS, 344.50 MiB/s [2024-11-19T12:32:16.301Z] [2024-11-19 12:32:16.128813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:11.040 [2024-11-19 12:32:16.129398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:11.609 [2024-11-19 12:32:16.566346] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:11.609 [2024-11-19 12:32:16.566945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:11.609 [2024-11-19 12:32:16.692732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.868 "name": "raid_bdev1", 00:12:11.868 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:11.868 "strip_size_kb": 0, 00:12:11.868 "state": "online", 00:12:11.868 "raid_level": "raid1", 00:12:11.868 "superblock": false, 00:12:11.868 "num_base_bdevs": 2, 00:12:11.868 "num_base_bdevs_discovered": 2, 00:12:11.868 "num_base_bdevs_operational": 2, 00:12:11.868 "process": { 00:12:11.868 "type": "rebuild", 00:12:11.868 "target": "spare", 00:12:11.868 "progress": { 00:12:11.868 "blocks": 61440, 00:12:11.868 "percent": 93 00:12:11.868 } 00:12:11.868 }, 00:12:11.868 "base_bdevs_list": [ 00:12:11.868 { 00:12:11.868 "name": "spare", 00:12:11.868 "uuid": "fc4fd9be-3ebe-516a-a315-ba69d6c827e6", 00:12:11.868 "is_configured": true, 00:12:11.868 "data_offset": 0, 00:12:11.868 "data_size": 65536 00:12:11.868 }, 00:12:11.868 { 00:12:11.868 "name": "BaseBdev2", 00:12:11.868 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:11.868 "is_configured": true, 00:12:11.868 "data_offset": 0, 00:12:11.868 "data_size": 65536 00:12:11.868 } 00:12:11.868 ] 00:12:11.868 }' 00:12:11.868 12:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.868 12:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.868 12:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.869 [2024-11-19 12:32:17.027564] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:11.869 12:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.869 12:32:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:12.127 103.00 IOPS, 309.00 MiB/s [2024-11-19T12:32:17.388Z] [2024-11-19 12:32:17.134374] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:12.127 [2024-11-19 12:32:17.137138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.066 94.38 IOPS, 283.12 MiB/s [2024-11-19T12:32:18.327Z] 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.066 "name": "raid_bdev1", 00:12:13.066 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:13.066 "strip_size_kb": 0, 00:12:13.066 "state": "online", 00:12:13.066 "raid_level": "raid1", 00:12:13.066 "superblock": false, 00:12:13.066 
"num_base_bdevs": 2, 00:12:13.066 "num_base_bdevs_discovered": 2, 00:12:13.066 "num_base_bdevs_operational": 2, 00:12:13.066 "base_bdevs_list": [ 00:12:13.066 { 00:12:13.066 "name": "spare", 00:12:13.066 "uuid": "fc4fd9be-3ebe-516a-a315-ba69d6c827e6", 00:12:13.066 "is_configured": true, 00:12:13.066 "data_offset": 0, 00:12:13.066 "data_size": 65536 00:12:13.066 }, 00:12:13.066 { 00:12:13.066 "name": "BaseBdev2", 00:12:13.066 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:13.066 "is_configured": true, 00:12:13.066 "data_offset": 0, 00:12:13.066 "data_size": 65536 00:12:13.066 } 00:12:13.066 ] 00:12:13.066 }' 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.066 12:32:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.066 "name": "raid_bdev1", 00:12:13.066 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:13.066 "strip_size_kb": 0, 00:12:13.066 "state": "online", 00:12:13.066 "raid_level": "raid1", 00:12:13.066 "superblock": false, 00:12:13.066 "num_base_bdevs": 2, 00:12:13.066 "num_base_bdevs_discovered": 2, 00:12:13.066 "num_base_bdevs_operational": 2, 00:12:13.066 "base_bdevs_list": [ 00:12:13.066 { 00:12:13.066 "name": "spare", 00:12:13.066 "uuid": "fc4fd9be-3ebe-516a-a315-ba69d6c827e6", 00:12:13.066 "is_configured": true, 00:12:13.066 "data_offset": 0, 00:12:13.066 "data_size": 65536 00:12:13.066 }, 00:12:13.066 { 00:12:13.066 "name": "BaseBdev2", 00:12:13.066 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:13.066 "is_configured": true, 00:12:13.066 "data_offset": 0, 00:12:13.066 "data_size": 65536 00:12:13.066 } 00:12:13.066 ] 00:12:13.066 }' 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.066 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.326 "name": "raid_bdev1", 00:12:13.326 "uuid": "ea67b4d4-6b5f-4655-be6c-f02f84a311f7", 00:12:13.326 "strip_size_kb": 0, 00:12:13.326 "state": "online", 00:12:13.326 "raid_level": "raid1", 00:12:13.326 "superblock": false, 00:12:13.326 "num_base_bdevs": 2, 00:12:13.326 "num_base_bdevs_discovered": 2, 00:12:13.326 "num_base_bdevs_operational": 2, 00:12:13.326 "base_bdevs_list": [ 00:12:13.326 { 00:12:13.326 "name": "spare", 00:12:13.326 "uuid": "fc4fd9be-3ebe-516a-a315-ba69d6c827e6", 00:12:13.326 "is_configured": true, 00:12:13.326 
"data_offset": 0, 00:12:13.326 "data_size": 65536 00:12:13.326 }, 00:12:13.326 { 00:12:13.326 "name": "BaseBdev2", 00:12:13.326 "uuid": "035fdb23-aefa-51fc-9586-2fd4769c38e4", 00:12:13.326 "is_configured": true, 00:12:13.326 "data_offset": 0, 00:12:13.326 "data_size": 65536 00:12:13.326 } 00:12:13.326 ] 00:12:13.326 }' 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.326 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.586 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.586 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.586 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.586 [2024-11-19 12:32:18.785544] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.586 [2024-11-19 12:32:18.785589] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.846 00:12:13.846 Latency(us) 00:12:13.846 [2024-11-19T12:32:19.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.846 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:13.846 raid_bdev1 : 8.82 88.55 265.66 0.00 0.00 16169.54 275.45 111726.00 00:12:13.846 [2024-11-19T12:32:19.107Z] =================================================================================================================== 00:12:13.846 [2024-11-19T12:32:19.107Z] Total : 88.55 265.66 0.00 0.00 16169.54 275.45 111726.00 00:12:13.846 [2024-11-19 12:32:18.869299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.846 [2024-11-19 12:32:18.869358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.846 [2024-11-19 12:32:18.869452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.846 [2024-11-19 12:32:18.869471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:13.846 { 00:12:13.846 "results": [ 00:12:13.846 { 00:12:13.846 "job": "raid_bdev1", 00:12:13.846 "core_mask": "0x1", 00:12:13.846 "workload": "randrw", 00:12:13.846 "percentage": 50, 00:12:13.846 "status": "finished", 00:12:13.846 "queue_depth": 2, 00:12:13.846 "io_size": 3145728, 00:12:13.846 "runtime": 8.819674, 00:12:13.846 "iops": 88.55202584585327, 00:12:13.846 "mibps": 265.6560775375598, 00:12:13.846 "io_failed": 0, 00:12:13.846 "io_timeout": 0, 00:12:13.846 "avg_latency_us": 16169.535355523376, 00:12:13.846 "min_latency_us": 275.45152838427947, 00:12:13.846 "max_latency_us": 111726.00174672488 00:12:13.846 } 00:12:13.846 ], 00:12:13.846 "core_count": 1 00:12:13.846 } 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:13.846 12:32:18 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:13.846 12:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:14.107 /dev/nbd0 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.107 1+0 records in 00:12:14.107 1+0 records out 00:12:14.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463004 s, 8.8 MB/s 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.107 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:14.368 /dev/nbd1 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.369 1+0 records in 00:12:14.369 1+0 records out 00:12:14.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437212 s, 9.4 MB/s 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.369 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.628 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:14.888 12:32:19 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87324 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87324 ']' 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87324 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.888 12:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87324 00:12:14.888 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.888 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.888 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87324' 00:12:14.888 killing process with pid 87324 00:12:14.888 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87324 00:12:14.888 Received shutdown signal, test time was about 9.970528 seconds 00:12:14.888 00:12:14.888 Latency(us) 00:12:14.888 [2024-11-19T12:32:20.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.888 [2024-11-19T12:32:20.149Z] =================================================================================================================== 00:12:14.888 [2024-11-19T12:32:20.150Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:14.889 [2024-11-19 12:32:20.014022] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.889 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 
87324 00:12:14.889 [2024-11-19 12:32:20.042221] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:15.149 00:12:15.149 real 0m11.918s 00:12:15.149 user 0m15.054s 00:12:15.149 sys 0m1.544s 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.149 ************************************ 00:12:15.149 END TEST raid_rebuild_test_io 00:12:15.149 ************************************ 00:12:15.149 12:32:20 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:15.149 12:32:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:15.149 12:32:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.149 12:32:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:15.149 ************************************ 00:12:15.149 START TEST raid_rebuild_test_sb_io 00:12:15.149 ************************************ 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:15.149 12:32:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:15.149 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87711 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87711 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87711 ']' 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.150 12:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.410 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:15.410 Zero copy mechanism will not be used. 00:12:15.410 [2024-11-19 12:32:20.457707] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:15.410 [2024-11-19 12:32:20.457877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87711 ] 00:12:15.410 [2024-11-19 12:32:20.602142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.410 [2024-11-19 12:32:20.655596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.669 [2024-11-19 12:32:20.698566] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.669 [2024-11-19 12:32:20.698606] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.238 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.238 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:16.238 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 BaseBdev1_malloc 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 [2024-11-19 12:32:21.330362] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:16.239 [2024-11-19 12:32:21.330457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.239 [2024-11-19 12:32:21.330494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:16.239 [2024-11-19 12:32:21.330513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.239 [2024-11-19 12:32:21.333160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.239 [2024-11-19 12:32:21.333209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.239 BaseBdev1 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 BaseBdev2_malloc 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 [2024-11-19 12:32:21.371505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:16.239 [2024-11-19 12:32:21.371596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:16.239 [2024-11-19 12:32:21.371632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:16.239 [2024-11-19 12:32:21.371653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.239 [2024-11-19 12:32:21.375211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.239 [2024-11-19 12:32:21.375259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:16.239 BaseBdev2 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 spare_malloc 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 spare_delay 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 
[2024-11-19 12:32:21.413027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:16.239 [2024-11-19 12:32:21.413113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.239 [2024-11-19 12:32:21.413141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:16.239 [2024-11-19 12:32:21.413152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.239 [2024-11-19 12:32:21.415759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.239 [2024-11-19 12:32:21.415803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:16.239 spare 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 [2024-11-19 12:32:21.425055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.239 [2024-11-19 12:32:21.427280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.239 [2024-11-19 12:32:21.427476] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:16.239 [2024-11-19 12:32:21.427492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.239 [2024-11-19 12:32:21.427845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:16.239 [2024-11-19 12:32:21.428029] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:16.239 [2024-11-19 
12:32:21.428052] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:16.239 [2024-11-19 12:32:21.428233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.239 "name": "raid_bdev1", 00:12:16.239 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:16.239 "strip_size_kb": 0, 00:12:16.239 "state": "online", 00:12:16.239 "raid_level": "raid1", 00:12:16.239 "superblock": true, 00:12:16.239 "num_base_bdevs": 2, 00:12:16.239 "num_base_bdevs_discovered": 2, 00:12:16.239 "num_base_bdevs_operational": 2, 00:12:16.239 "base_bdevs_list": [ 00:12:16.239 { 00:12:16.239 "name": "BaseBdev1", 00:12:16.239 "uuid": "3fc5b0d5-0450-5441-98e4-bd0c5ce68b32", 00:12:16.239 "is_configured": true, 00:12:16.239 "data_offset": 2048, 00:12:16.239 "data_size": 63488 00:12:16.239 }, 00:12:16.239 { 00:12:16.239 "name": "BaseBdev2", 00:12:16.239 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:16.239 "is_configured": true, 00:12:16.239 "data_offset": 2048, 00:12:16.239 "data_size": 63488 00:12:16.239 } 00:12:16.239 ] 00:12:16.239 }' 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.239 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.806 [2024-11-19 12:32:21.892700] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.806 [2024-11-19 12:32:21.988206] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.806 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.807 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.807 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.807 12:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.807 12:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.807 12:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.807 "name": "raid_bdev1", 00:12:16.807 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:16.807 "strip_size_kb": 0, 00:12:16.807 "state": "online", 00:12:16.807 "raid_level": "raid1", 00:12:16.807 "superblock": true, 00:12:16.807 "num_base_bdevs": 2, 00:12:16.807 "num_base_bdevs_discovered": 1, 00:12:16.807 "num_base_bdevs_operational": 1, 00:12:16.807 "base_bdevs_list": [ 00:12:16.807 { 00:12:16.807 "name": null, 00:12:16.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.807 "is_configured": false, 00:12:16.807 "data_offset": 0, 00:12:16.807 "data_size": 63488 00:12:16.807 }, 00:12:16.807 { 00:12:16.807 "name": "BaseBdev2", 00:12:16.807 "uuid": 
"8610193f-b925-5734-951b-f7d7ac27b212", 00:12:16.807 "is_configured": true, 00:12:16.807 "data_offset": 2048, 00:12:16.807 "data_size": 63488 00:12:16.807 } 00:12:16.807 ] 00:12:16.807 }' 00:12:16.807 12:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.807 12:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.066 [2024-11-19 12:32:22.086151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:17.066 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.066 Zero copy mechanism will not be used. 00:12:17.066 Running I/O for 60 seconds... 00:12:17.326 12:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:17.326 12:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.326 12:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.326 [2024-11-19 12:32:22.440554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:17.326 12:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.326 12:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:17.326 [2024-11-19 12:32:22.487416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:17.326 [2024-11-19 12:32:22.489830] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:17.587 [2024-11-19 12:32:22.628530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:17.847 [2024-11-19 12:32:22.852885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:17.847 [2024-11-19 12:32:22.853229] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:18.108 158.00 IOPS, 474.00 MiB/s [2024-11-19T12:32:23.369Z] [2024-11-19 12:32:23.196892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:18.369 [2024-11-19 12:32:23.429289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.369 "name": "raid_bdev1", 00:12:18.369 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:18.369 "strip_size_kb": 0, 00:12:18.369 "state": "online", 00:12:18.369 "raid_level": "raid1", 00:12:18.369 "superblock": true, 00:12:18.369 "num_base_bdevs": 2, 
00:12:18.369 "num_base_bdevs_discovered": 2, 00:12:18.369 "num_base_bdevs_operational": 2, 00:12:18.369 "process": { 00:12:18.369 "type": "rebuild", 00:12:18.369 "target": "spare", 00:12:18.369 "progress": { 00:12:18.369 "blocks": 10240, 00:12:18.369 "percent": 16 00:12:18.369 } 00:12:18.369 }, 00:12:18.369 "base_bdevs_list": [ 00:12:18.369 { 00:12:18.369 "name": "spare", 00:12:18.369 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:18.369 "is_configured": true, 00:12:18.369 "data_offset": 2048, 00:12:18.369 "data_size": 63488 00:12:18.369 }, 00:12:18.369 { 00:12:18.369 "name": "BaseBdev2", 00:12:18.369 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:18.369 "is_configured": true, 00:12:18.369 "data_offset": 2048, 00:12:18.369 "data_size": 63488 00:12:18.369 } 00:12:18.369 ] 00:12:18.369 }' 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.369 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.629 [2024-11-19 12:32:23.632954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.629 [2024-11-19 12:32:23.675735] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:18.629 [2024-11-19 12:32:23.684987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:18.629 [2024-11-19 12:32:23.685042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.630 [2024-11-19 12:32:23.685062] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:18.630 [2024-11-19 12:32:23.705679] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.630 "name": "raid_bdev1", 00:12:18.630 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:18.630 "strip_size_kb": 0, 00:12:18.630 "state": "online", 00:12:18.630 "raid_level": "raid1", 00:12:18.630 "superblock": true, 00:12:18.630 "num_base_bdevs": 2, 00:12:18.630 "num_base_bdevs_discovered": 1, 00:12:18.630 "num_base_bdevs_operational": 1, 00:12:18.630 "base_bdevs_list": [ 00:12:18.630 { 00:12:18.630 "name": null, 00:12:18.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.630 "is_configured": false, 00:12:18.630 "data_offset": 0, 00:12:18.630 "data_size": 63488 00:12:18.630 }, 00:12:18.630 { 00:12:18.630 "name": "BaseBdev2", 00:12:18.630 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:18.630 "is_configured": true, 00:12:18.630 "data_offset": 2048, 00:12:18.630 "data_size": 63488 00:12:18.630 } 00:12:18.630 ] 00:12:18.630 }' 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.630 12:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.150 151.00 IOPS, 453.00 MiB/s [2024-11-19T12:32:24.411Z] 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.150 "name": "raid_bdev1", 00:12:19.150 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:19.150 "strip_size_kb": 0, 00:12:19.150 "state": "online", 00:12:19.150 "raid_level": "raid1", 00:12:19.150 "superblock": true, 00:12:19.150 "num_base_bdevs": 2, 00:12:19.150 "num_base_bdevs_discovered": 1, 00:12:19.150 "num_base_bdevs_operational": 1, 00:12:19.150 "base_bdevs_list": [ 00:12:19.150 { 00:12:19.150 "name": null, 00:12:19.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.150 "is_configured": false, 00:12:19.150 "data_offset": 0, 00:12:19.150 "data_size": 63488 00:12:19.150 }, 00:12:19.150 { 00:12:19.150 "name": "BaseBdev2", 00:12:19.150 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:19.150 "is_configured": true, 00:12:19.150 "data_offset": 2048, 00:12:19.150 "data_size": 63488 00:12:19.150 } 00:12:19.150 ] 00:12:19.150 }' 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.150 [2024-11-19 12:32:24.317874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.150 12:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:19.150 [2024-11-19 12:32:24.377242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:19.150 [2024-11-19 12:32:24.379572] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:19.410 [2024-11-19 12:32:24.488924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:19.410 [2024-11-19 12:32:24.489518] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:19.768 [2024-11-19 12:32:24.700086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:19.768 [2024-11-19 12:32:24.700441] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:20.028 [2024-11-19 12:32:25.052486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:20.028 141.67 IOPS, 425.00 MiB/s [2024-11-19T12:32:25.289Z] [2024-11-19 12:32:25.256947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:20.028 [2024-11-19 12:32:25.257291] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:20.288 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.288 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.289 "name": "raid_bdev1", 00:12:20.289 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:20.289 "strip_size_kb": 0, 00:12:20.289 "state": "online", 00:12:20.289 "raid_level": "raid1", 00:12:20.289 "superblock": true, 00:12:20.289 "num_base_bdevs": 2, 00:12:20.289 "num_base_bdevs_discovered": 2, 00:12:20.289 "num_base_bdevs_operational": 2, 00:12:20.289 "process": { 00:12:20.289 "type": "rebuild", 00:12:20.289 "target": "spare", 00:12:20.289 "progress": { 00:12:20.289 "blocks": 10240, 00:12:20.289 "percent": 16 00:12:20.289 } 00:12:20.289 }, 00:12:20.289 "base_bdevs_list": [ 00:12:20.289 { 00:12:20.289 "name": 
"spare", 00:12:20.289 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:20.289 "is_configured": true, 00:12:20.289 "data_offset": 2048, 00:12:20.289 "data_size": 63488 00:12:20.289 }, 00:12:20.289 { 00:12:20.289 "name": "BaseBdev2", 00:12:20.289 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:20.289 "is_configured": true, 00:12:20.289 "data_offset": 2048, 00:12:20.289 "data_size": 63488 00:12:20.289 } 00:12:20.289 ] 00:12:20.289 }' 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:20.289 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=338 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.289 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.549 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.549 "name": "raid_bdev1", 00:12:20.549 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:20.549 "strip_size_kb": 0, 00:12:20.549 "state": "online", 00:12:20.549 "raid_level": "raid1", 00:12:20.549 "superblock": true, 00:12:20.549 "num_base_bdevs": 2, 00:12:20.549 "num_base_bdevs_discovered": 2, 00:12:20.549 "num_base_bdevs_operational": 2, 00:12:20.549 "process": { 00:12:20.549 "type": "rebuild", 00:12:20.549 "target": "spare", 00:12:20.549 "progress": { 00:12:20.549 "blocks": 14336, 00:12:20.549 "percent": 22 00:12:20.549 } 00:12:20.549 }, 00:12:20.549 "base_bdevs_list": [ 00:12:20.549 { 00:12:20.549 "name": "spare", 00:12:20.549 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:20.549 "is_configured": true, 00:12:20.549 "data_offset": 2048, 00:12:20.549 "data_size": 63488 00:12:20.549 }, 00:12:20.549 { 00:12:20.549 "name": "BaseBdev2", 00:12:20.549 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:20.549 "is_configured": true, 
00:12:20.549 "data_offset": 2048, 00:12:20.549 "data_size": 63488 00:12:20.549 } 00:12:20.549 ] 00:12:20.549 }' 00:12:20.549 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.549 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.549 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.549 [2024-11-19 12:32:25.619399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:20.549 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.549 12:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:20.809 [2024-11-19 12:32:25.972395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:21.069 121.25 IOPS, 363.75 MiB/s [2024-11-19T12:32:26.330Z] [2024-11-19 12:32:26.206684] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:21.329 [2024-11-19 12:32:26.434242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:21.589 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.589 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.589 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.589 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.590 12:32:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.590 "name": "raid_bdev1", 00:12:21.590 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:21.590 "strip_size_kb": 0, 00:12:21.590 "state": "online", 00:12:21.590 "raid_level": "raid1", 00:12:21.590 "superblock": true, 00:12:21.590 "num_base_bdevs": 2, 00:12:21.590 "num_base_bdevs_discovered": 2, 00:12:21.590 "num_base_bdevs_operational": 2, 00:12:21.590 "process": { 00:12:21.590 "type": "rebuild", 00:12:21.590 "target": "spare", 00:12:21.590 "progress": { 00:12:21.590 "blocks": 28672, 00:12:21.590 "percent": 45 00:12:21.590 } 00:12:21.590 }, 00:12:21.590 "base_bdevs_list": [ 00:12:21.590 { 00:12:21.590 "name": "spare", 00:12:21.590 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:21.590 "is_configured": true, 00:12:21.590 "data_offset": 2048, 00:12:21.590 "data_size": 63488 00:12:21.590 }, 00:12:21.590 { 00:12:21.590 "name": "BaseBdev2", 00:12:21.590 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:21.590 "is_configured": true, 00:12:21.590 "data_offset": 2048, 00:12:21.590 "data_size": 63488 00:12:21.590 } 00:12:21.590 ] 00:12:21.590 }' 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.590 12:32:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.590 [2024-11-19 12:32:26.807828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.590 12:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:22.419 109.40 IOPS, 328.20 MiB/s [2024-11-19T12:32:27.680Z] [2024-11-19 12:32:27.468923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.679 12:32:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.679 "name": "raid_bdev1", 00:12:22.679 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:22.679 "strip_size_kb": 0, 00:12:22.679 "state": "online", 00:12:22.679 "raid_level": "raid1", 00:12:22.679 "superblock": true, 00:12:22.679 "num_base_bdevs": 2, 00:12:22.679 "num_base_bdevs_discovered": 2, 00:12:22.679 "num_base_bdevs_operational": 2, 00:12:22.679 "process": { 00:12:22.679 "type": "rebuild", 00:12:22.679 "target": "spare", 00:12:22.679 "progress": { 00:12:22.679 "blocks": 49152, 00:12:22.679 "percent": 77 00:12:22.679 } 00:12:22.679 }, 00:12:22.679 "base_bdevs_list": [ 00:12:22.679 { 00:12:22.679 "name": "spare", 00:12:22.679 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:22.679 "is_configured": true, 00:12:22.679 "data_offset": 2048, 00:12:22.679 "data_size": 63488 00:12:22.679 }, 00:12:22.679 { 00:12:22.679 "name": "BaseBdev2", 00:12:22.679 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:22.679 "is_configured": true, 00:12:22.679 "data_offset": 2048, 00:12:22.679 "data_size": 63488 00:12:22.679 } 00:12:22.679 ] 00:12:22.679 }' 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.679 [2024-11-19 12:32:27.909379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:22.679 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.680 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.939 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.939 12:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:12:22.939 97.17 IOPS, 291.50 MiB/s [2024-11-19T12:32:28.200Z] [2024-11-19 12:32:28.123497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:23.199 [2024-11-19 12:32:28.439338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:23.770 [2024-11-19 12:32:28.866377] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:23.770 [2024-11-19 12:32:28.966252] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:23.770 [2024-11-19 12:32:28.967928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.770 12:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.770 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:12:24.039 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.039 "name": "raid_bdev1", 00:12:24.039 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:24.039 "strip_size_kb": 0, 00:12:24.039 "state": "online", 00:12:24.039 "raid_level": "raid1", 00:12:24.039 "superblock": true, 00:12:24.039 "num_base_bdevs": 2, 00:12:24.039 "num_base_bdevs_discovered": 2, 00:12:24.039 "num_base_bdevs_operational": 2, 00:12:24.039 "base_bdevs_list": [ 00:12:24.039 { 00:12:24.039 "name": "spare", 00:12:24.039 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:24.039 "is_configured": true, 00:12:24.039 "data_offset": 2048, 00:12:24.039 "data_size": 63488 00:12:24.039 }, 00:12:24.039 { 00:12:24.039 "name": "BaseBdev2", 00:12:24.039 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:24.039 "is_configured": true, 00:12:24.039 "data_offset": 2048, 00:12:24.039 "data_size": 63488 00:12:24.039 } 00:12:24.039 ] 00:12:24.039 }' 00:12:24.039 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.039 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:24.039 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.039 87.57 IOPS, 262.71 MiB/s [2024-11-19T12:32:29.300Z] 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.040 12:32:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.040 "name": "raid_bdev1", 00:12:24.040 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:24.040 "strip_size_kb": 0, 00:12:24.040 "state": "online", 00:12:24.040 "raid_level": "raid1", 00:12:24.040 "superblock": true, 00:12:24.040 "num_base_bdevs": 2, 00:12:24.040 "num_base_bdevs_discovered": 2, 00:12:24.040 "num_base_bdevs_operational": 2, 00:12:24.040 "base_bdevs_list": [ 00:12:24.040 { 00:12:24.040 "name": "spare", 00:12:24.040 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:24.040 "is_configured": true, 00:12:24.040 "data_offset": 2048, 00:12:24.040 "data_size": 63488 00:12:24.040 }, 00:12:24.040 { 00:12:24.040 "name": "BaseBdev2", 00:12:24.040 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:24.040 "is_configured": true, 00:12:24.040 "data_offset": 2048, 00:12:24.040 "data_size": 63488 00:12:24.040 } 00:12:24.040 ] 00:12:24.040 }' 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.040 12:32:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:24.040 "name": "raid_bdev1", 00:12:24.040 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:24.040 "strip_size_kb": 0, 00:12:24.040 "state": "online", 00:12:24.040 "raid_level": "raid1", 00:12:24.040 "superblock": true, 00:12:24.040 "num_base_bdevs": 2, 00:12:24.040 "num_base_bdevs_discovered": 2, 00:12:24.040 "num_base_bdevs_operational": 2, 00:12:24.040 "base_bdevs_list": [ 00:12:24.040 { 00:12:24.040 "name": "spare", 00:12:24.040 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:24.040 "is_configured": true, 00:12:24.040 "data_offset": 2048, 00:12:24.040 "data_size": 63488 00:12:24.040 }, 00:12:24.040 { 00:12:24.040 "name": "BaseBdev2", 00:12:24.040 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:24.040 "is_configured": true, 00:12:24.040 "data_offset": 2048, 00:12:24.040 "data_size": 63488 00:12:24.040 } 00:12:24.040 ] 00:12:24.040 }' 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.040 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.610 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.610 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.610 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.610 [2024-11-19 12:32:29.677828] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.610 [2024-11-19 12:32:29.677870] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.610 00:12:24.610 Latency(us) 00:12:24.610 [2024-11-19T12:32:29.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.610 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:24.610 raid_bdev1 : 7.64 83.73 251.19 0.00 0.00 16001.26 
271.87 109436.53 00:12:24.610 [2024-11-19T12:32:29.871Z] =================================================================================================================== 00:12:24.610 [2024-11-19T12:32:29.871Z] Total : 83.73 251.19 0.00 0.00 16001.26 271.87 109436.53 00:12:24.610 [2024-11-19 12:32:29.721522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.610 [2024-11-19 12:32:29.721575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.610 [2024-11-19 12:32:29.721657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.610 [2024-11-19 12:32:29.721670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:24.610 { 00:12:24.610 "results": [ 00:12:24.610 { 00:12:24.610 "job": "raid_bdev1", 00:12:24.610 "core_mask": "0x1", 00:12:24.610 "workload": "randrw", 00:12:24.610 "percentage": 50, 00:12:24.610 "status": "finished", 00:12:24.610 "queue_depth": 2, 00:12:24.610 "io_size": 3145728, 00:12:24.610 "runtime": 7.643633, 00:12:24.610 "iops": 83.72981800669918, 00:12:24.610 "mibps": 251.18945402009751, 00:12:24.610 "io_failed": 0, 00:12:24.610 "io_timeout": 0, 00:12:24.610 "avg_latency_us": 16001.261834061135, 00:12:24.610 "min_latency_us": 271.87423580786026, 00:12:24.610 "max_latency_us": 109436.5344978166 00:12:24.610 } 00:12:24.610 ], 00:12:24.610 "core_count": 1 00:12:24.610 } 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.611 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:24.871 /dev/nbd0 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:24.871 12:32:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:24.871 12:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.871 1+0 records in 00:12:24.871 1+0 records out 00:12:24.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286424 s, 14.3 MB/s 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 
00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.871 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:25.131 /dev/nbd1 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.131 1+0 records in 00:12:25.131 1+0 records out 00:12:25.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440613 s, 9.3 MB/s 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.131 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:25.391 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.391 12:32:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.654 [2024-11-19 12:32:30.808680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:25.654 
[2024-11-19 12:32:30.808778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.654 [2024-11-19 12:32:30.808805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:25.654 [2024-11-19 12:32:30.808816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.654 [2024-11-19 12:32:30.811133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.654 [2024-11-19 12:32:30.811178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:25.654 [2024-11-19 12:32:30.811284] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:25.654 [2024-11-19 12:32:30.811323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.654 [2024-11-19 12:32:30.811432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.654 spare 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.654 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.654 [2024-11-19 12:32:30.911343] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:25.654 [2024-11-19 12:32:30.911421] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.654 [2024-11-19 12:32:30.911833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:12:25.654 [2024-11-19 12:32:30.912052] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:25.654 [2024-11-19 12:32:30.912064] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:25.915 [2024-11-19 12:32:30.912263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.915 "name": "raid_bdev1", 00:12:25.915 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:25.915 "strip_size_kb": 0, 00:12:25.915 "state": "online", 00:12:25.915 "raid_level": "raid1", 00:12:25.915 "superblock": true, 00:12:25.915 "num_base_bdevs": 2, 00:12:25.915 "num_base_bdevs_discovered": 2, 00:12:25.915 "num_base_bdevs_operational": 2, 00:12:25.915 "base_bdevs_list": [ 00:12:25.915 { 00:12:25.915 "name": "spare", 00:12:25.915 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:25.915 "is_configured": true, 00:12:25.915 "data_offset": 2048, 00:12:25.915 "data_size": 63488 00:12:25.915 }, 00:12:25.915 { 00:12:25.915 "name": "BaseBdev2", 00:12:25.915 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:25.915 "is_configured": true, 00:12:25.915 "data_offset": 2048, 00:12:25.915 "data_size": 63488 00:12:25.915 } 00:12:25.915 ] 00:12:25.915 }' 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.915 12:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.175 "name": "raid_bdev1", 00:12:26.175 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:26.175 "strip_size_kb": 0, 00:12:26.175 "state": "online", 00:12:26.175 "raid_level": "raid1", 00:12:26.175 "superblock": true, 00:12:26.175 "num_base_bdevs": 2, 00:12:26.175 "num_base_bdevs_discovered": 2, 00:12:26.175 "num_base_bdevs_operational": 2, 00:12:26.175 "base_bdevs_list": [ 00:12:26.175 { 00:12:26.175 "name": "spare", 00:12:26.175 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:26.175 "is_configured": true, 00:12:26.175 "data_offset": 2048, 00:12:26.175 "data_size": 63488 00:12:26.175 }, 00:12:26.175 { 00:12:26.175 "name": "BaseBdev2", 00:12:26.175 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:26.175 "is_configured": true, 00:12:26.175 "data_offset": 2048, 00:12:26.175 "data_size": 63488 00:12:26.175 } 00:12:26.175 ] 00:12:26.175 }' 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.175 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.435 [2024-11-19 12:32:31.499611] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.435 "name": "raid_bdev1", 00:12:26.435 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:26.435 "strip_size_kb": 0, 00:12:26.435 "state": "online", 00:12:26.435 "raid_level": "raid1", 00:12:26.435 "superblock": true, 00:12:26.435 "num_base_bdevs": 2, 00:12:26.435 "num_base_bdevs_discovered": 1, 00:12:26.435 "num_base_bdevs_operational": 1, 00:12:26.435 "base_bdevs_list": [ 00:12:26.435 { 00:12:26.435 "name": null, 00:12:26.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.435 "is_configured": false, 00:12:26.435 "data_offset": 0, 00:12:26.435 "data_size": 63488 00:12:26.435 }, 00:12:26.435 { 00:12:26.435 "name": "BaseBdev2", 00:12:26.435 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:26.435 "is_configured": true, 00:12:26.435 "data_offset": 2048, 00:12:26.435 "data_size": 63488 00:12:26.435 } 00:12:26.435 ] 00:12:26.435 }' 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.435 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.006 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:12:27.006 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.006 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.006 [2024-11-19 12:32:31.970927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.006 [2024-11-19 12:32:31.971235] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:27.006 [2024-11-19 12:32:31.971261] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:27.006 [2024-11-19 12:32:31.971321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.006 [2024-11-19 12:32:31.975923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:12:27.006 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.006 12:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:27.006 [2024-11-19 12:32:31.977893] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.947 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.947 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.947 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.947 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.947 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.947 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.947 12:32:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.947 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.947 12:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.947 "name": "raid_bdev1", 00:12:27.947 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:27.947 "strip_size_kb": 0, 00:12:27.947 "state": "online", 00:12:27.947 "raid_level": "raid1", 00:12:27.947 "superblock": true, 00:12:27.947 "num_base_bdevs": 2, 00:12:27.947 "num_base_bdevs_discovered": 2, 00:12:27.947 "num_base_bdevs_operational": 2, 00:12:27.947 "process": { 00:12:27.947 "type": "rebuild", 00:12:27.947 "target": "spare", 00:12:27.947 "progress": { 00:12:27.947 "blocks": 20480, 00:12:27.947 "percent": 32 00:12:27.947 } 00:12:27.947 }, 00:12:27.947 "base_bdevs_list": [ 00:12:27.947 { 00:12:27.947 "name": "spare", 00:12:27.947 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:27.947 "is_configured": true, 00:12:27.947 "data_offset": 2048, 00:12:27.947 "data_size": 63488 00:12:27.947 }, 00:12:27.947 { 00:12:27.947 "name": "BaseBdev2", 00:12:27.947 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:27.947 "is_configured": true, 00:12:27.947 "data_offset": 2048, 00:12:27.947 "data_size": 63488 00:12:27.947 } 00:12:27.947 ] 00:12:27.947 }' 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.947 12:32:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.947 [2024-11-19 12:32:33.147323] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.947 [2024-11-19 12:32:33.183138] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:27.947 [2024-11-19 12:32:33.183248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.947 [2024-11-19 12:32:33.183281] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:27.947 [2024-11-19 12:32:33.183303] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.947 12:32:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.947 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.207 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.207 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.207 "name": "raid_bdev1", 00:12:28.207 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:28.207 "strip_size_kb": 0, 00:12:28.207 "state": "online", 00:12:28.207 "raid_level": "raid1", 00:12:28.207 "superblock": true, 00:12:28.207 "num_base_bdevs": 2, 00:12:28.207 "num_base_bdevs_discovered": 1, 00:12:28.207 "num_base_bdevs_operational": 1, 00:12:28.207 "base_bdevs_list": [ 00:12:28.207 { 00:12:28.207 "name": null, 00:12:28.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.207 "is_configured": false, 00:12:28.207 "data_offset": 0, 00:12:28.207 "data_size": 63488 00:12:28.207 }, 00:12:28.207 { 00:12:28.207 "name": "BaseBdev2", 00:12:28.207 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:28.207 "is_configured": true, 00:12:28.207 "data_offset": 2048, 00:12:28.207 "data_size": 63488 00:12:28.207 } 00:12:28.207 ] 00:12:28.207 }' 00:12:28.207 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.207 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.467 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:28.467 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.467 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.467 [2024-11-19 12:32:33.647128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:28.467 [2024-11-19 12:32:33.647218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.467 [2024-11-19 12:32:33.647244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:28.467 [2024-11-19 12:32:33.647258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.467 [2024-11-19 12:32:33.647715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.467 [2024-11-19 12:32:33.647737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:28.467 [2024-11-19 12:32:33.647857] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:28.467 [2024-11-19 12:32:33.647874] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:28.467 [2024-11-19 12:32:33.647886] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:28.467 [2024-11-19 12:32:33.647921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:28.467 spare 00:12:28.467 [2024-11-19 12:32:33.652428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:28.467 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.467 12:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:28.467 [2024-11-19 12:32:33.654315] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:29.419 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.419 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.419 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.419 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.419 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.419 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.419 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.419 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.419 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.679 "name": "raid_bdev1", 00:12:29.679 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:29.679 "strip_size_kb": 0, 00:12:29.679 
"state": "online", 00:12:29.679 "raid_level": "raid1", 00:12:29.679 "superblock": true, 00:12:29.679 "num_base_bdevs": 2, 00:12:29.679 "num_base_bdevs_discovered": 2, 00:12:29.679 "num_base_bdevs_operational": 2, 00:12:29.679 "process": { 00:12:29.679 "type": "rebuild", 00:12:29.679 "target": "spare", 00:12:29.679 "progress": { 00:12:29.679 "blocks": 20480, 00:12:29.679 "percent": 32 00:12:29.679 } 00:12:29.679 }, 00:12:29.679 "base_bdevs_list": [ 00:12:29.679 { 00:12:29.679 "name": "spare", 00:12:29.679 "uuid": "2c392439-d606-50aa-a97e-87213673d4f1", 00:12:29.679 "is_configured": true, 00:12:29.679 "data_offset": 2048, 00:12:29.679 "data_size": 63488 00:12:29.679 }, 00:12:29.679 { 00:12:29.679 "name": "BaseBdev2", 00:12:29.679 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:29.679 "is_configured": true, 00:12:29.679 "data_offset": 2048, 00:12:29.679 "data_size": 63488 00:12:29.679 } 00:12:29.679 ] 00:12:29.679 }' 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.679 [2024-11-19 12:32:34.815120] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.679 [2024-11-19 12:32:34.859516] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:29.679 [2024-11-19 12:32:34.859633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.679 [2024-11-19 12:32:34.859674] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.679 [2024-11-19 12:32:34.859684] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.679 12:32:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.679 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.679 "name": "raid_bdev1", 00:12:29.680 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:29.680 "strip_size_kb": 0, 00:12:29.680 "state": "online", 00:12:29.680 "raid_level": "raid1", 00:12:29.680 "superblock": true, 00:12:29.680 "num_base_bdevs": 2, 00:12:29.680 "num_base_bdevs_discovered": 1, 00:12:29.680 "num_base_bdevs_operational": 1, 00:12:29.680 "base_bdevs_list": [ 00:12:29.680 { 00:12:29.680 "name": null, 00:12:29.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.680 "is_configured": false, 00:12:29.680 "data_offset": 0, 00:12:29.680 "data_size": 63488 00:12:29.680 }, 00:12:29.680 { 00:12:29.680 "name": "BaseBdev2", 00:12:29.680 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:29.680 "is_configured": true, 00:12:29.680 "data_offset": 2048, 00:12:29.680 "data_size": 63488 00:12:29.680 } 00:12:29.680 ] 00:12:29.680 }' 00:12:29.680 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.680 12:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.250 "name": "raid_bdev1", 00:12:30.250 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:30.250 "strip_size_kb": 0, 00:12:30.250 "state": "online", 00:12:30.250 "raid_level": "raid1", 00:12:30.250 "superblock": true, 00:12:30.250 "num_base_bdevs": 2, 00:12:30.250 "num_base_bdevs_discovered": 1, 00:12:30.250 "num_base_bdevs_operational": 1, 00:12:30.250 "base_bdevs_list": [ 00:12:30.250 { 00:12:30.250 "name": null, 00:12:30.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.250 "is_configured": false, 00:12:30.250 "data_offset": 0, 00:12:30.250 "data_size": 63488 00:12:30.250 }, 00:12:30.250 { 00:12:30.250 "name": "BaseBdev2", 00:12:30.250 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:30.250 "is_configured": true, 00:12:30.250 "data_offset": 2048, 00:12:30.250 "data_size": 63488 00:12:30.250 } 00:12:30.250 ] 00:12:30.250 }' 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:30.250 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.251 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.251 [2024-11-19 12:32:35.463508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:30.251 [2024-11-19 12:32:35.463588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.251 [2024-11-19 12:32:35.463613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:30.251 [2024-11-19 12:32:35.463623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.251 [2024-11-19 12:32:35.464073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.251 [2024-11-19 12:32:35.464106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.251 [2024-11-19 12:32:35.464196] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:30.251 [2024-11-19 12:32:35.464224] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:30.251 [2024-11-19 12:32:35.464233] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:30.251 [2024-11-19 12:32:35.464244] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:30.251 BaseBdev1 00:12:30.251 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.251 12:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.634 "name": "raid_bdev1", 00:12:31.634 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:31.634 "strip_size_kb": 0, 00:12:31.634 "state": "online", 00:12:31.634 "raid_level": "raid1", 00:12:31.634 "superblock": true, 00:12:31.634 "num_base_bdevs": 2, 00:12:31.634 "num_base_bdevs_discovered": 1, 00:12:31.634 "num_base_bdevs_operational": 1, 00:12:31.634 "base_bdevs_list": [ 00:12:31.634 { 00:12:31.634 "name": null, 00:12:31.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.634 "is_configured": false, 00:12:31.634 "data_offset": 0, 00:12:31.634 "data_size": 63488 00:12:31.634 }, 00:12:31.634 { 00:12:31.634 "name": "BaseBdev2", 00:12:31.634 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:31.634 "is_configured": true, 00:12:31.634 "data_offset": 2048, 00:12:31.634 "data_size": 63488 00:12:31.634 } 00:12:31.634 ] 00:12:31.634 }' 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.634 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.895 "name": "raid_bdev1", 00:12:31.895 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:31.895 "strip_size_kb": 0, 00:12:31.895 "state": "online", 00:12:31.895 "raid_level": "raid1", 00:12:31.895 "superblock": true, 00:12:31.895 "num_base_bdevs": 2, 00:12:31.895 "num_base_bdevs_discovered": 1, 00:12:31.895 "num_base_bdevs_operational": 1, 00:12:31.895 "base_bdevs_list": [ 00:12:31.895 { 00:12:31.895 "name": null, 00:12:31.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.895 "is_configured": false, 00:12:31.895 "data_offset": 0, 00:12:31.895 "data_size": 63488 00:12:31.895 }, 00:12:31.895 { 00:12:31.895 "name": "BaseBdev2", 00:12:31.895 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:31.895 "is_configured": true, 00:12:31.895 "data_offset": 2048, 00:12:31.895 "data_size": 63488 00:12:31.895 } 00:12:31.895 ] 00:12:31.895 }' 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.895 12:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.895 [2024-11-19 12:32:37.064929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.895 [2024-11-19 12:32:37.065146] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:31.895 [2024-11-19 12:32:37.065202] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:31.895 request: 00:12:31.895 { 00:12:31.895 "base_bdev": "BaseBdev1", 00:12:31.895 "raid_bdev": "raid_bdev1", 00:12:31.895 "method": "bdev_raid_add_base_bdev", 00:12:31.895 "req_id": 1 00:12:31.895 } 00:12:31.895 Got JSON-RPC error response 00:12:31.895 response: 00:12:31.895 { 00:12:31.895 "code": -22, 00:12:31.895 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:31.895 } 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.895 12:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:32.835 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.095 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.095 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.095 "name": "raid_bdev1", 00:12:33.095 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:33.095 "strip_size_kb": 0, 00:12:33.095 "state": "online", 00:12:33.095 "raid_level": "raid1", 00:12:33.095 "superblock": true, 00:12:33.095 "num_base_bdevs": 2, 00:12:33.095 "num_base_bdevs_discovered": 1, 00:12:33.095 "num_base_bdevs_operational": 1, 00:12:33.095 "base_bdevs_list": [ 00:12:33.095 { 00:12:33.095 "name": null, 00:12:33.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.095 "is_configured": false, 00:12:33.095 "data_offset": 0, 00:12:33.095 "data_size": 63488 00:12:33.095 }, 00:12:33.095 { 00:12:33.095 "name": "BaseBdev2", 00:12:33.095 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:33.095 "is_configured": true, 00:12:33.095 "data_offset": 2048, 00:12:33.095 "data_size": 63488 00:12:33.095 } 00:12:33.095 ] 00:12:33.095 }' 00:12:33.095 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.095 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.355 12:32:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.355 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.355 "name": "raid_bdev1", 00:12:33.355 "uuid": "e0a31df1-c017-40e1-9e88-2876fb7c70cb", 00:12:33.355 "strip_size_kb": 0, 00:12:33.355 "state": "online", 00:12:33.355 "raid_level": "raid1", 00:12:33.355 "superblock": true, 00:12:33.355 "num_base_bdevs": 2, 00:12:33.355 "num_base_bdevs_discovered": 1, 00:12:33.355 "num_base_bdevs_operational": 1, 00:12:33.355 "base_bdevs_list": [ 00:12:33.355 { 00:12:33.355 "name": null, 00:12:33.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.355 "is_configured": false, 00:12:33.355 "data_offset": 0, 00:12:33.355 "data_size": 63488 00:12:33.355 }, 00:12:33.355 { 00:12:33.355 "name": "BaseBdev2", 00:12:33.355 "uuid": "8610193f-b925-5734-951b-f7d7ac27b212", 00:12:33.355 "is_configured": true, 00:12:33.355 "data_offset": 2048, 00:12:33.355 "data_size": 63488 00:12:33.355 } 00:12:33.355 ] 00:12:33.355 }' 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.616 12:32:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87711 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87711 ']' 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87711 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87711 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:33.616 killing process with pid 87711 00:12:33.616 Received shutdown signal, test time was about 16.694014 seconds 00:12:33.616 00:12:33.616 Latency(us) 00:12:33.616 [2024-11-19T12:32:38.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.616 [2024-11-19T12:32:38.877Z] =================================================================================================================== 00:12:33.616 [2024-11-19T12:32:38.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87711' 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87711 00:12:33.616 [2024-11-19 12:32:38.750602] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.616 [2024-11-19 12:32:38.750803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.616 12:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87711 00:12:33.616 [2024-11-19 12:32:38.750885] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.616 [2024-11-19 12:32:38.750903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:33.616 [2024-11-19 12:32:38.778346] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.876 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:33.876 00:12:33.876 real 0m18.651s 00:12:33.876 user 0m24.932s 00:12:33.876 sys 0m2.156s 00:12:33.876 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.876 ************************************ 00:12:33.876 END TEST raid_rebuild_test_sb_io 00:12:33.876 ************************************ 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.877 12:32:39 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:33.877 12:32:39 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:33.877 12:32:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:33.877 12:32:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.877 12:32:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.877 ************************************ 00:12:33.877 START TEST raid_rebuild_test 00:12:33.877 ************************************ 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:33.877 12:32:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88383 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88383 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88383 ']' 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.877 12:32:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.139 [2024-11-19 12:32:39.208655] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:34.139 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:34.139 Zero copy mechanism will not be used. 00:12:34.139 [2024-11-19 12:32:39.208936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88383 ] 00:12:34.139 [2024-11-19 12:32:39.372093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.398 [2024-11-19 12:32:39.419898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.398 [2024-11-19 12:32:39.462516] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.398 [2024-11-19 12:32:39.462553] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.968 BaseBdev1_malloc 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:12:34.968 [2024-11-19 12:32:40.057353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:34.968 [2024-11-19 12:32:40.057415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.968 [2024-11-19 12:32:40.057444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:34.968 [2024-11-19 12:32:40.057460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.968 [2024-11-19 12:32:40.059665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.968 [2024-11-19 12:32:40.059707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:34.968 BaseBdev1 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:34.968 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 BaseBdev2_malloc 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 [2024-11-19 12:32:40.095600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:34.969 [2024-11-19 12:32:40.095751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:12:34.969 [2024-11-19 12:32:40.095778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:34.969 [2024-11-19 12:32:40.095788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.969 [2024-11-19 12:32:40.097886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.969 [2024-11-19 12:32:40.097924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:34.969 BaseBdev2 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 BaseBdev3_malloc 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 [2024-11-19 12:32:40.124185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:34.969 [2024-11-19 12:32:40.124246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.969 [2024-11-19 12:32:40.124271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:34.969 [2024-11-19 12:32:40.124280] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.969 [2024-11-19 12:32:40.126320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.969 [2024-11-19 12:32:40.126450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:34.969 BaseBdev3 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 BaseBdev4_malloc 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 [2024-11-19 12:32:40.153041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:34.969 [2024-11-19 12:32:40.153117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.969 [2024-11-19 12:32:40.153148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:34.969 [2024-11-19 12:32:40.153157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.969 [2024-11-19 12:32:40.155278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.969 [2024-11-19 12:32:40.155317] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:34.969 BaseBdev4 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 spare_malloc 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 spare_delay 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 [2024-11-19 12:32:40.194061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:34.969 [2024-11-19 12:32:40.194139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.969 [2024-11-19 12:32:40.194165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:34.969 [2024-11-19 12:32:40.194175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.969 [2024-11-19 
12:32:40.196411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.969 [2024-11-19 12:32:40.196536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:34.969 spare 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.969 [2024-11-19 12:32:40.206138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.969 [2024-11-19 12:32:40.208060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.969 [2024-11-19 12:32:40.208127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.969 [2024-11-19 12:32:40.208167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.969 [2024-11-19 12:32:40.208251] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:34.969 [2024-11-19 12:32:40.208260] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:34.969 [2024-11-19 12:32:40.208540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:34.969 [2024-11-19 12:32:40.208690] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:34.969 [2024-11-19 12:32:40.208708] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:34.969 [2024-11-19 12:32:40.208867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.969 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.230 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.230 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.230 "name": "raid_bdev1", 00:12:35.230 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:35.230 "strip_size_kb": 0, 00:12:35.230 "state": "online", 00:12:35.230 "raid_level": 
"raid1", 00:12:35.230 "superblock": false, 00:12:35.230 "num_base_bdevs": 4, 00:12:35.230 "num_base_bdevs_discovered": 4, 00:12:35.230 "num_base_bdevs_operational": 4, 00:12:35.230 "base_bdevs_list": [ 00:12:35.230 { 00:12:35.230 "name": "BaseBdev1", 00:12:35.230 "uuid": "9513f459-4a58-583f-a5ba-734739837cd0", 00:12:35.230 "is_configured": true, 00:12:35.230 "data_offset": 0, 00:12:35.230 "data_size": 65536 00:12:35.230 }, 00:12:35.230 { 00:12:35.230 "name": "BaseBdev2", 00:12:35.230 "uuid": "5187801f-da86-53e1-8a66-a00ee596dc9b", 00:12:35.230 "is_configured": true, 00:12:35.230 "data_offset": 0, 00:12:35.230 "data_size": 65536 00:12:35.230 }, 00:12:35.230 { 00:12:35.230 "name": "BaseBdev3", 00:12:35.230 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:35.230 "is_configured": true, 00:12:35.230 "data_offset": 0, 00:12:35.230 "data_size": 65536 00:12:35.230 }, 00:12:35.230 { 00:12:35.230 "name": "BaseBdev4", 00:12:35.230 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:35.230 "is_configured": true, 00:12:35.230 "data_offset": 0, 00:12:35.230 "data_size": 65536 00:12:35.230 } 00:12:35.230 ] 00:12:35.230 }' 00:12:35.230 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.230 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:35.490 [2024-11-19 12:32:40.673602] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.490 12:32:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:35.490 12:32:40 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:35.750 [2024-11-19 12:32:40.929003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:35.750 /dev/nbd0 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.750 1+0 records in 00:12:35.750 1+0 records out 00:12:35.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557458 s, 7.3 MB/s 00:12:35.750 12:32:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.750 12:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:35.750 12:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:36.008 12:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:36.008 12:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:36.008 12:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.008 12:32:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.008 12:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:36.008 12:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:36.008 12:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:42.586 65536+0 records in 00:12:42.586 65536+0 records out 00:12:42.586 33554432 bytes (34 MB, 32 MiB) copied, 5.94642 s, 5.6 MB/s 00:12:42.586 12:32:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:42.586 12:32:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.586 12:32:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:42.586 12:32:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.586 12:32:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:42.586 12:32:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.586 12:32:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:42.586 [2024-11-19 12:32:47.173897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:42.586 
12:32:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.586 [2024-11-19 12:32:47.190941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.586 12:32:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.586 "name": "raid_bdev1", 00:12:42.586 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:42.586 "strip_size_kb": 0, 00:12:42.586 "state": "online", 00:12:42.586 "raid_level": "raid1", 00:12:42.586 "superblock": false, 00:12:42.586 "num_base_bdevs": 4, 00:12:42.586 "num_base_bdevs_discovered": 3, 00:12:42.586 "num_base_bdevs_operational": 3, 00:12:42.586 "base_bdevs_list": [ 00:12:42.586 { 00:12:42.586 "name": null, 00:12:42.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.586 "is_configured": false, 00:12:42.586 "data_offset": 0, 00:12:42.586 "data_size": 65536 00:12:42.586 }, 00:12:42.586 { 00:12:42.586 "name": "BaseBdev2", 00:12:42.586 "uuid": "5187801f-da86-53e1-8a66-a00ee596dc9b", 00:12:42.586 "is_configured": true, 00:12:42.586 "data_offset": 0, 00:12:42.586 "data_size": 65536 00:12:42.586 }, 00:12:42.586 { 00:12:42.586 "name": "BaseBdev3", 00:12:42.586 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:42.586 "is_configured": true, 00:12:42.586 "data_offset": 0, 00:12:42.586 "data_size": 65536 00:12:42.586 }, 00:12:42.586 { 00:12:42.586 "name": "BaseBdev4", 00:12:42.586 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:42.586 
"is_configured": true, 00:12:42.586 "data_offset": 0, 00:12:42.586 "data_size": 65536 00:12:42.586 } 00:12:42.586 ] 00:12:42.586 }' 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.586 [2024-11-19 12:32:47.626384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.586 [2024-11-19 12:32:47.629781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:42.586 [2024-11-19 12:32:47.631656] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.586 12:32:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.527 
12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.527 "name": "raid_bdev1", 00:12:43.527 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:43.527 "strip_size_kb": 0, 00:12:43.527 "state": "online", 00:12:43.527 "raid_level": "raid1", 00:12:43.527 "superblock": false, 00:12:43.527 "num_base_bdevs": 4, 00:12:43.527 "num_base_bdevs_discovered": 4, 00:12:43.527 "num_base_bdevs_operational": 4, 00:12:43.527 "process": { 00:12:43.527 "type": "rebuild", 00:12:43.527 "target": "spare", 00:12:43.527 "progress": { 00:12:43.527 "blocks": 20480, 00:12:43.527 "percent": 31 00:12:43.527 } 00:12:43.527 }, 00:12:43.527 "base_bdevs_list": [ 00:12:43.527 { 00:12:43.527 "name": "spare", 00:12:43.527 "uuid": "2297af1a-dc7f-5066-bfa8-82bda3967c8b", 00:12:43.527 "is_configured": true, 00:12:43.527 "data_offset": 0, 00:12:43.527 "data_size": 65536 00:12:43.527 }, 00:12:43.527 { 00:12:43.527 "name": "BaseBdev2", 00:12:43.527 "uuid": "5187801f-da86-53e1-8a66-a00ee596dc9b", 00:12:43.527 "is_configured": true, 00:12:43.527 "data_offset": 0, 00:12:43.527 "data_size": 65536 00:12:43.527 }, 00:12:43.527 { 00:12:43.527 "name": "BaseBdev3", 00:12:43.527 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:43.527 "is_configured": true, 00:12:43.527 "data_offset": 0, 00:12:43.527 "data_size": 65536 00:12:43.527 }, 00:12:43.527 { 00:12:43.527 "name": "BaseBdev4", 00:12:43.527 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:43.527 "is_configured": true, 00:12:43.527 "data_offset": 0, 00:12:43.527 "data_size": 65536 00:12:43.527 } 00:12:43.527 ] 00:12:43.527 }' 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.527 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.527 [2024-11-19 12:32:48.770882] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.787 [2024-11-19 12:32:48.837448] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:43.787 [2024-11-19 12:32:48.837558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.787 [2024-11-19 12:32:48.837579] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.787 [2024-11-19 12:32:48.837588] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.787 12:32:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.787 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.787 "name": "raid_bdev1", 00:12:43.787 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:43.787 "strip_size_kb": 0, 00:12:43.787 "state": "online", 00:12:43.787 "raid_level": "raid1", 00:12:43.787 "superblock": false, 00:12:43.787 "num_base_bdevs": 4, 00:12:43.787 "num_base_bdevs_discovered": 3, 00:12:43.787 "num_base_bdevs_operational": 3, 00:12:43.787 "base_bdevs_list": [ 00:12:43.787 { 00:12:43.787 "name": null, 00:12:43.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.787 "is_configured": false, 00:12:43.787 "data_offset": 0, 00:12:43.787 "data_size": 65536 00:12:43.787 }, 00:12:43.787 { 00:12:43.787 "name": "BaseBdev2", 00:12:43.787 "uuid": "5187801f-da86-53e1-8a66-a00ee596dc9b", 00:12:43.787 "is_configured": true, 00:12:43.787 "data_offset": 0, 00:12:43.787 "data_size": 65536 00:12:43.787 }, 00:12:43.787 { 00:12:43.787 "name": 
"BaseBdev3", 00:12:43.787 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:43.787 "is_configured": true, 00:12:43.787 "data_offset": 0, 00:12:43.787 "data_size": 65536 00:12:43.787 }, 00:12:43.787 { 00:12:43.787 "name": "BaseBdev4", 00:12:43.788 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:43.788 "is_configured": true, 00:12:43.788 "data_offset": 0, 00:12:43.788 "data_size": 65536 00:12:43.788 } 00:12:43.788 ] 00:12:43.788 }' 00:12:43.788 12:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.788 12:32:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.046 "name": "raid_bdev1", 00:12:44.046 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:44.046 "strip_size_kb": 0, 00:12:44.046 "state": "online", 00:12:44.046 "raid_level": 
"raid1", 00:12:44.046 "superblock": false, 00:12:44.046 "num_base_bdevs": 4, 00:12:44.046 "num_base_bdevs_discovered": 3, 00:12:44.046 "num_base_bdevs_operational": 3, 00:12:44.046 "base_bdevs_list": [ 00:12:44.046 { 00:12:44.046 "name": null, 00:12:44.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.046 "is_configured": false, 00:12:44.046 "data_offset": 0, 00:12:44.046 "data_size": 65536 00:12:44.046 }, 00:12:44.046 { 00:12:44.046 "name": "BaseBdev2", 00:12:44.046 "uuid": "5187801f-da86-53e1-8a66-a00ee596dc9b", 00:12:44.046 "is_configured": true, 00:12:44.046 "data_offset": 0, 00:12:44.046 "data_size": 65536 00:12:44.046 }, 00:12:44.046 { 00:12:44.046 "name": "BaseBdev3", 00:12:44.046 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:44.046 "is_configured": true, 00:12:44.046 "data_offset": 0, 00:12:44.046 "data_size": 65536 00:12:44.046 }, 00:12:44.046 { 00:12:44.046 "name": "BaseBdev4", 00:12:44.046 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:44.046 "is_configured": true, 00:12:44.046 "data_offset": 0, 00:12:44.046 "data_size": 65536 00:12:44.046 } 00:12:44.046 ] 00:12:44.046 }' 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.046 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.306 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.306 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.306 12:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.306 12:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.306 [2024-11-19 12:32:49.357015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:12:44.306 [2024-11-19 12:32:49.360461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:44.306 [2024-11-19 12:32:49.362500] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.306 12:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.306 12:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.242 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.243 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.243 "name": "raid_bdev1", 00:12:45.243 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:45.243 "strip_size_kb": 0, 00:12:45.243 "state": "online", 00:12:45.243 "raid_level": "raid1", 00:12:45.243 "superblock": false, 00:12:45.243 "num_base_bdevs": 4, 00:12:45.243 "num_base_bdevs_discovered": 4, 00:12:45.243 "num_base_bdevs_operational": 4, 
00:12:45.243 "process": { 00:12:45.243 "type": "rebuild", 00:12:45.243 "target": "spare", 00:12:45.243 "progress": { 00:12:45.243 "blocks": 20480, 00:12:45.243 "percent": 31 00:12:45.243 } 00:12:45.243 }, 00:12:45.243 "base_bdevs_list": [ 00:12:45.243 { 00:12:45.243 "name": "spare", 00:12:45.243 "uuid": "2297af1a-dc7f-5066-bfa8-82bda3967c8b", 00:12:45.243 "is_configured": true, 00:12:45.243 "data_offset": 0, 00:12:45.243 "data_size": 65536 00:12:45.243 }, 00:12:45.243 { 00:12:45.243 "name": "BaseBdev2", 00:12:45.243 "uuid": "5187801f-da86-53e1-8a66-a00ee596dc9b", 00:12:45.243 "is_configured": true, 00:12:45.243 "data_offset": 0, 00:12:45.243 "data_size": 65536 00:12:45.243 }, 00:12:45.243 { 00:12:45.243 "name": "BaseBdev3", 00:12:45.243 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:45.243 "is_configured": true, 00:12:45.243 "data_offset": 0, 00:12:45.243 "data_size": 65536 00:12:45.243 }, 00:12:45.243 { 00:12:45.243 "name": "BaseBdev4", 00:12:45.243 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:45.243 "is_configured": true, 00:12:45.243 "data_offset": 0, 00:12:45.243 "data_size": 65536 00:12:45.243 } 00:12:45.243 ] 00:12:45.243 }' 00:12:45.243 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.243 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.243 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.502 [2024-11-19 12:32:50.532996] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:45.502 [2024-11-19 12:32:50.567587] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:45.502 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.503 "name": "raid_bdev1", 00:12:45.503 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:45.503 "strip_size_kb": 0, 00:12:45.503 "state": "online", 00:12:45.503 "raid_level": "raid1", 00:12:45.503 "superblock": false, 00:12:45.503 "num_base_bdevs": 4, 00:12:45.503 "num_base_bdevs_discovered": 3, 00:12:45.503 "num_base_bdevs_operational": 3, 00:12:45.503 "process": { 00:12:45.503 "type": "rebuild", 00:12:45.503 "target": "spare", 00:12:45.503 "progress": { 00:12:45.503 "blocks": 24576, 00:12:45.503 "percent": 37 00:12:45.503 } 00:12:45.503 }, 00:12:45.503 "base_bdevs_list": [ 00:12:45.503 { 00:12:45.503 "name": "spare", 00:12:45.503 "uuid": "2297af1a-dc7f-5066-bfa8-82bda3967c8b", 00:12:45.503 "is_configured": true, 00:12:45.503 "data_offset": 0, 00:12:45.503 "data_size": 65536 00:12:45.503 }, 00:12:45.503 { 00:12:45.503 "name": null, 00:12:45.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.503 "is_configured": false, 00:12:45.503 "data_offset": 0, 00:12:45.503 "data_size": 65536 00:12:45.503 }, 00:12:45.503 { 00:12:45.503 "name": "BaseBdev3", 00:12:45.503 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:45.503 "is_configured": true, 00:12:45.503 "data_offset": 0, 00:12:45.503 "data_size": 65536 00:12:45.503 }, 00:12:45.503 { 00:12:45.503 "name": "BaseBdev4", 00:12:45.503 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:45.503 "is_configured": true, 00:12:45.503 "data_offset": 0, 00:12:45.503 "data_size": 65536 00:12:45.503 } 00:12:45.503 ] 00:12:45.503 }' 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.503 12:32:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=363 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.503 12:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.763 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.763 "name": "raid_bdev1", 00:12:45.763 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:45.763 "strip_size_kb": 0, 00:12:45.763 "state": "online", 00:12:45.763 "raid_level": "raid1", 00:12:45.763 "superblock": false, 00:12:45.763 "num_base_bdevs": 4, 00:12:45.763 "num_base_bdevs_discovered": 3, 00:12:45.763 "num_base_bdevs_operational": 3, 00:12:45.763 "process": { 00:12:45.763 "type": "rebuild", 00:12:45.763 "target": "spare", 00:12:45.763 "progress": { 00:12:45.763 "blocks": 26624, 00:12:45.763 "percent": 40 
00:12:45.763 } 00:12:45.763 }, 00:12:45.763 "base_bdevs_list": [ 00:12:45.763 { 00:12:45.763 "name": "spare", 00:12:45.763 "uuid": "2297af1a-dc7f-5066-bfa8-82bda3967c8b", 00:12:45.763 "is_configured": true, 00:12:45.763 "data_offset": 0, 00:12:45.763 "data_size": 65536 00:12:45.763 }, 00:12:45.763 { 00:12:45.763 "name": null, 00:12:45.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.763 "is_configured": false, 00:12:45.763 "data_offset": 0, 00:12:45.763 "data_size": 65536 00:12:45.763 }, 00:12:45.763 { 00:12:45.763 "name": "BaseBdev3", 00:12:45.763 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:45.763 "is_configured": true, 00:12:45.763 "data_offset": 0, 00:12:45.763 "data_size": 65536 00:12:45.763 }, 00:12:45.763 { 00:12:45.763 "name": "BaseBdev4", 00:12:45.763 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:45.763 "is_configured": true, 00:12:45.763 "data_offset": 0, 00:12:45.763 "data_size": 65536 00:12:45.763 } 00:12:45.763 ] 00:12:45.763 }' 00:12:45.763 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.763 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.763 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.763 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.763 12:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.702 12:32:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.702 "name": "raid_bdev1", 00:12:46.702 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:46.702 "strip_size_kb": 0, 00:12:46.702 "state": "online", 00:12:46.702 "raid_level": "raid1", 00:12:46.702 "superblock": false, 00:12:46.702 "num_base_bdevs": 4, 00:12:46.702 "num_base_bdevs_discovered": 3, 00:12:46.702 "num_base_bdevs_operational": 3, 00:12:46.702 "process": { 00:12:46.702 "type": "rebuild", 00:12:46.702 "target": "spare", 00:12:46.702 "progress": { 00:12:46.702 "blocks": 49152, 00:12:46.702 "percent": 75 00:12:46.702 } 00:12:46.702 }, 00:12:46.702 "base_bdevs_list": [ 00:12:46.702 { 00:12:46.702 "name": "spare", 00:12:46.702 "uuid": "2297af1a-dc7f-5066-bfa8-82bda3967c8b", 00:12:46.702 "is_configured": true, 00:12:46.702 "data_offset": 0, 00:12:46.702 "data_size": 65536 00:12:46.702 }, 00:12:46.702 { 00:12:46.702 "name": null, 00:12:46.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.702 "is_configured": false, 00:12:46.702 "data_offset": 0, 00:12:46.702 "data_size": 65536 00:12:46.702 }, 00:12:46.702 { 00:12:46.702 "name": "BaseBdev3", 00:12:46.702 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:46.702 "is_configured": true, 
00:12:46.702 "data_offset": 0, 00:12:46.702 "data_size": 65536 00:12:46.702 }, 00:12:46.702 { 00:12:46.702 "name": "BaseBdev4", 00:12:46.702 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:46.702 "is_configured": true, 00:12:46.702 "data_offset": 0, 00:12:46.702 "data_size": 65536 00:12:46.702 } 00:12:46.702 ] 00:12:46.702 }' 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.702 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.962 12:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.962 12:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.962 12:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:47.541 [2024-11-19 12:32:52.576656] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:47.541 [2024-11-19 12:32:52.576955] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:47.541 [2024-11-19 12:32:52.577040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.816 "name": "raid_bdev1", 00:12:47.816 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:47.816 "strip_size_kb": 0, 00:12:47.816 "state": "online", 00:12:47.816 "raid_level": "raid1", 00:12:47.816 "superblock": false, 00:12:47.816 "num_base_bdevs": 4, 00:12:47.816 "num_base_bdevs_discovered": 3, 00:12:47.816 "num_base_bdevs_operational": 3, 00:12:47.816 "base_bdevs_list": [ 00:12:47.816 { 00:12:47.816 "name": "spare", 00:12:47.816 "uuid": "2297af1a-dc7f-5066-bfa8-82bda3967c8b", 00:12:47.816 "is_configured": true, 00:12:47.816 "data_offset": 0, 00:12:47.816 "data_size": 65536 00:12:47.816 }, 00:12:47.816 { 00:12:47.816 "name": null, 00:12:47.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.816 "is_configured": false, 00:12:47.816 "data_offset": 0, 00:12:47.816 "data_size": 65536 00:12:47.816 }, 00:12:47.816 { 00:12:47.816 "name": "BaseBdev3", 00:12:47.816 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:47.816 "is_configured": true, 00:12:47.816 "data_offset": 0, 00:12:47.816 "data_size": 65536 00:12:47.816 }, 00:12:47.816 { 00:12:47.816 "name": "BaseBdev4", 00:12:47.816 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:47.816 "is_configured": true, 00:12:47.816 "data_offset": 0, 00:12:47.816 "data_size": 65536 00:12:47.816 } 00:12:47.816 ] 00:12:47.816 }' 00:12:47.816 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.076 12:32:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.076 "name": "raid_bdev1", 00:12:48.076 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:48.076 "strip_size_kb": 0, 00:12:48.076 "state": "online", 00:12:48.076 "raid_level": "raid1", 00:12:48.076 "superblock": false, 00:12:48.076 "num_base_bdevs": 4, 00:12:48.076 "num_base_bdevs_discovered": 3, 00:12:48.076 "num_base_bdevs_operational": 3, 00:12:48.076 "base_bdevs_list": [ 00:12:48.076 { 00:12:48.076 "name": "spare", 
00:12:48.076 "uuid": "2297af1a-dc7f-5066-bfa8-82bda3967c8b", 00:12:48.076 "is_configured": true, 00:12:48.076 "data_offset": 0, 00:12:48.076 "data_size": 65536 00:12:48.076 }, 00:12:48.076 { 00:12:48.076 "name": null, 00:12:48.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.076 "is_configured": false, 00:12:48.076 "data_offset": 0, 00:12:48.076 "data_size": 65536 00:12:48.076 }, 00:12:48.076 { 00:12:48.076 "name": "BaseBdev3", 00:12:48.076 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:48.076 "is_configured": true, 00:12:48.076 "data_offset": 0, 00:12:48.076 "data_size": 65536 00:12:48.076 }, 00:12:48.076 { 00:12:48.076 "name": "BaseBdev4", 00:12:48.076 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:48.076 "is_configured": true, 00:12:48.076 "data_offset": 0, 00:12:48.076 "data_size": 65536 00:12:48.076 } 00:12:48.076 ] 00:12:48.076 }' 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.076 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.077 12:32:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.077 "name": "raid_bdev1", 00:12:48.077 "uuid": "d8bea2e1-42e0-49e7-a0b6-1198ba326549", 00:12:48.077 "strip_size_kb": 0, 00:12:48.077 "state": "online", 00:12:48.077 "raid_level": "raid1", 00:12:48.077 "superblock": false, 00:12:48.077 "num_base_bdevs": 4, 00:12:48.077 "num_base_bdevs_discovered": 3, 00:12:48.077 "num_base_bdevs_operational": 3, 00:12:48.077 "base_bdevs_list": [ 00:12:48.077 { 00:12:48.077 "name": "spare", 00:12:48.077 "uuid": "2297af1a-dc7f-5066-bfa8-82bda3967c8b", 00:12:48.077 "is_configured": true, 00:12:48.077 "data_offset": 0, 00:12:48.077 "data_size": 65536 00:12:48.077 }, 00:12:48.077 { 00:12:48.077 "name": null, 00:12:48.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.077 "is_configured": false, 00:12:48.077 "data_offset": 0, 00:12:48.077 "data_size": 65536 00:12:48.077 }, 00:12:48.077 { 00:12:48.077 "name": "BaseBdev3", 00:12:48.077 "uuid": "c69547a3-9142-555d-8c15-18b63e840c2a", 00:12:48.077 "is_configured": true, 
00:12:48.077 "data_offset": 0, 00:12:48.077 "data_size": 65536 00:12:48.077 }, 00:12:48.077 { 00:12:48.077 "name": "BaseBdev4", 00:12:48.077 "uuid": "87ddd7a6-f253-5957-b4f9-dab8d0073523", 00:12:48.077 "is_configured": true, 00:12:48.077 "data_offset": 0, 00:12:48.077 "data_size": 65536 00:12:48.077 } 00:12:48.077 ] 00:12:48.077 }' 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.077 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.646 [2024-11-19 12:32:53.643512] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:48.646 [2024-11-19 12:32:53.643650] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.646 [2024-11-19 12:32:53.643831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.646 [2024-11-19 12:32:53.643927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.646 [2024-11-19 12:32:53.643943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.646 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:48.646 /dev/nbd0 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:48.907 12:32:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.907 1+0 records in 00:12:48.907 1+0 records out 00:12:48.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275412 s, 14.9 MB/s 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.907 12:32:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:48.907 /dev/nbd1 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:48.907 
12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:48.907 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.167 1+0 records in 00:12:49.167 1+0 records out 00:12:49.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043787 s, 9.4 MB/s 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.167 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.427 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:49.686 
12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88383 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88383 ']' 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88383 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88383 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88383' 00:12:49.686 killing process with pid 88383 00:12:49.686 Received shutdown signal, test time was about 60.000000 seconds 00:12:49.686 00:12:49.686 Latency(us) 
00:12:49.686 [2024-11-19T12:32:54.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.686 [2024-11-19T12:32:54.947Z] =================================================================================================================== 00:12:49.686 [2024-11-19T12:32:54.947Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88383 00:12:49.686 [2024-11-19 12:32:54.752297] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.686 12:32:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88383 00:12:49.686 [2024-11-19 12:32:54.803953] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:49.946 00:12:49.946 real 0m15.949s 00:12:49.946 user 0m17.394s 00:12:49.946 sys 0m3.651s 00:12:49.946 ************************************ 00:12:49.946 END TEST raid_rebuild_test 00:12:49.946 ************************************ 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.946 12:32:55 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:49.946 12:32:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:49.946 12:32:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.946 12:32:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.946 ************************************ 00:12:49.946 START TEST raid_rebuild_test_sb 00:12:49.946 ************************************ 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:12:49.946 12:32:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88820 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88820 00:12:49.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88820 ']' 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.946 12:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.947 12:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.206 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:50.206 Zero copy mechanism will not be used. 00:12:50.206 [2024-11-19 12:32:55.221093] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:50.206 [2024-11-19 12:32:55.221224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88820 ] 00:12:50.206 [2024-11-19 12:32:55.382385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.206 [2024-11-19 12:32:55.433017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.467 [2024-11-19 12:32:55.475239] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.467 [2024-11-19 12:32:55.475277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.035 BaseBdev1_malloc 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.035 [2024-11-19 12:32:56.062053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:51.035 [2024-11-19 12:32:56.062132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.035 [2024-11-19 12:32:56.062169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:51.035 [2024-11-19 12:32:56.062185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.035 [2024-11-19 12:32:56.064348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.035 [2024-11-19 12:32:56.064467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:51.035 BaseBdev1 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.035 BaseBdev2_malloc 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.035 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.035 [2024-11-19 12:32:56.101675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:51.035 [2024-11-19 12:32:56.101818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.035 [2024-11-19 12:32:56.101843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:51.035 [2024-11-19 12:32:56.101852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.035 [2024-11-19 12:32:56.103976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.035 [2024-11-19 12:32:56.104006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:51.035 BaseBdev2 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:51.036 BaseBdev3_malloc 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 [2024-11-19 12:32:56.130229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:51.036 [2024-11-19 12:32:56.130281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.036 [2024-11-19 12:32:56.130304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:51.036 [2024-11-19 12:32:56.130313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.036 [2024-11-19 12:32:56.132395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.036 [2024-11-19 12:32:56.132432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:51.036 BaseBdev3 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 BaseBdev4_malloc 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 [2024-11-19 12:32:56.158798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:51.036 [2024-11-19 12:32:56.158855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.036 [2024-11-19 12:32:56.158880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:51.036 [2024-11-19 12:32:56.158890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.036 [2024-11-19 12:32:56.160975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.036 [2024-11-19 12:32:56.161012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:51.036 BaseBdev4 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 spare_malloc 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:51.036 spare_delay 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 [2024-11-19 12:32:56.199512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:51.036 [2024-11-19 12:32:56.199568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.036 [2024-11-19 12:32:56.199589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:51.036 [2024-11-19 12:32:56.199598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.036 [2024-11-19 12:32:56.201671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.036 [2024-11-19 12:32:56.201708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:51.036 spare 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 [2024-11-19 12:32:56.211594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.036 [2024-11-19 12:32:56.213502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.036 [2024-11-19 
12:32:56.213573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.036 [2024-11-19 12:32:56.213618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:51.036 [2024-11-19 12:32:56.213807] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:51.036 [2024-11-19 12:32:56.213820] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:51.036 [2024-11-19 12:32:56.214074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:51.036 [2024-11-19 12:32:56.214227] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:51.036 [2024-11-19 12:32:56.214249] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:51.036 [2024-11-19 12:32:56.214388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.036 "name": "raid_bdev1", 00:12:51.036 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:12:51.036 "strip_size_kb": 0, 00:12:51.036 "state": "online", 00:12:51.036 "raid_level": "raid1", 00:12:51.036 "superblock": true, 00:12:51.036 "num_base_bdevs": 4, 00:12:51.036 "num_base_bdevs_discovered": 4, 00:12:51.036 "num_base_bdevs_operational": 4, 00:12:51.036 "base_bdevs_list": [ 00:12:51.036 { 00:12:51.036 "name": "BaseBdev1", 00:12:51.036 "uuid": "695e138b-031f-5b16-b11e-72cdfa6fd8de", 00:12:51.036 "is_configured": true, 00:12:51.036 "data_offset": 2048, 00:12:51.036 "data_size": 63488 00:12:51.036 }, 00:12:51.036 { 00:12:51.036 "name": "BaseBdev2", 00:12:51.036 "uuid": "8660bb56-b1eb-5f8b-a8af-1f5e4ed0bb3f", 00:12:51.036 "is_configured": true, 00:12:51.036 "data_offset": 2048, 00:12:51.036 "data_size": 63488 00:12:51.036 }, 00:12:51.036 { 00:12:51.036 "name": "BaseBdev3", 00:12:51.036 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:12:51.036 "is_configured": true, 00:12:51.036 "data_offset": 2048, 00:12:51.036 "data_size": 63488 00:12:51.036 }, 00:12:51.036 { 00:12:51.036 "name": "BaseBdev4", 00:12:51.036 
"uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:12:51.036 "is_configured": true, 00:12:51.036 "data_offset": 2048, 00:12:51.036 "data_size": 63488 00:12:51.036 } 00:12:51.036 ] 00:12:51.036 }' 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.036 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.604 [2024-11-19 12:32:56.635199] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true 
']' 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.604 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:51.864 [2024-11-19 12:32:56.894497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:51.864 /dev/nbd0 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( 
i <= 20 )) 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.864 1+0 records in 00:12:51.864 1+0 records out 00:12:51.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207142 s, 19.8 MB/s 00:12:51.864 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.865 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:51.865 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.865 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.865 12:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:51.865 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.865 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.865 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:51.865 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:51.865 12:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:57.143 63488+0 records in 00:12:57.143 63488+0 records out 00:12:57.143 32505856 bytes 
(33 MB, 31 MiB) copied, 5.42949 s, 6.0 MB/s 00:12:57.143 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:57.143 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.143 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:57.143 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.143 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:57.143 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.143 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:57.403 [2024-11-19 12:33:02.631871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.403 [2024-11-19 12:33:02.646858] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.403 12:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.663 12:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.663 12:33:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.663 "name": "raid_bdev1", 00:12:57.663 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:12:57.664 "strip_size_kb": 0, 00:12:57.664 "state": "online", 00:12:57.664 "raid_level": "raid1", 00:12:57.664 "superblock": true, 00:12:57.664 "num_base_bdevs": 4, 00:12:57.664 "num_base_bdevs_discovered": 3, 00:12:57.664 "num_base_bdevs_operational": 3, 00:12:57.664 "base_bdevs_list": [ 00:12:57.664 { 00:12:57.664 "name": null, 00:12:57.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.664 "is_configured": false, 00:12:57.664 "data_offset": 0, 00:12:57.664 "data_size": 63488 00:12:57.664 }, 00:12:57.664 { 00:12:57.664 "name": "BaseBdev2", 00:12:57.664 "uuid": "8660bb56-b1eb-5f8b-a8af-1f5e4ed0bb3f", 00:12:57.664 "is_configured": true, 00:12:57.664 "data_offset": 2048, 00:12:57.664 "data_size": 63488 00:12:57.664 }, 00:12:57.664 { 00:12:57.664 "name": "BaseBdev3", 00:12:57.664 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:12:57.664 "is_configured": true, 00:12:57.664 "data_offset": 2048, 00:12:57.664 "data_size": 63488 00:12:57.664 }, 00:12:57.664 { 00:12:57.664 "name": "BaseBdev4", 00:12:57.664 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:12:57.664 "is_configured": true, 00:12:57.664 "data_offset": 2048, 00:12:57.664 "data_size": 63488 00:12:57.664 } 00:12:57.664 ] 00:12:57.664 }' 00:12:57.664 12:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.664 12:33:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.924 12:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:57.924 12:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.924 12:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.924 [2024-11-19 12:33:03.066512] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.924 [2024-11-19 12:33:03.070065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:57.924 [2024-11-19 12:33:03.072063] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.924 12:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.924 12:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.871 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.871 "name": "raid_bdev1", 00:12:58.871 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:12:58.871 "strip_size_kb": 0, 00:12:58.871 "state": "online", 00:12:58.871 "raid_level": "raid1", 00:12:58.871 "superblock": true, 00:12:58.871 
"num_base_bdevs": 4, 00:12:58.871 "num_base_bdevs_discovered": 4, 00:12:58.871 "num_base_bdevs_operational": 4, 00:12:58.871 "process": { 00:12:58.871 "type": "rebuild", 00:12:58.871 "target": "spare", 00:12:58.871 "progress": { 00:12:58.871 "blocks": 20480, 00:12:58.871 "percent": 32 00:12:58.871 } 00:12:58.871 }, 00:12:58.871 "base_bdevs_list": [ 00:12:58.871 { 00:12:58.871 "name": "spare", 00:12:58.871 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:12:58.871 "is_configured": true, 00:12:58.871 "data_offset": 2048, 00:12:58.871 "data_size": 63488 00:12:58.871 }, 00:12:58.871 { 00:12:58.871 "name": "BaseBdev2", 00:12:58.871 "uuid": "8660bb56-b1eb-5f8b-a8af-1f5e4ed0bb3f", 00:12:58.871 "is_configured": true, 00:12:58.871 "data_offset": 2048, 00:12:58.871 "data_size": 63488 00:12:58.871 }, 00:12:58.871 { 00:12:58.871 "name": "BaseBdev3", 00:12:58.871 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:12:58.871 "is_configured": true, 00:12:58.871 "data_offset": 2048, 00:12:58.871 "data_size": 63488 00:12:58.871 }, 00:12:58.871 { 00:12:58.871 "name": "BaseBdev4", 00:12:58.871 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:12:58.871 "is_configured": true, 00:12:58.871 "data_offset": 2048, 00:12:58.871 "data_size": 63488 00:12:58.871 } 00:12:58.871 ] 00:12:58.871 }' 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.131 12:33:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.131 [2024-11-19 12:33:04.210980] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.131 [2024-11-19 12:33:04.277049] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:59.131 [2024-11-19 12:33:04.277171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.131 [2024-11-19 12:33:04.277213] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.131 [2024-11-19 12:33:04.277234] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.131 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.131 "name": "raid_bdev1", 00:12:59.131 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:12:59.131 "strip_size_kb": 0, 00:12:59.131 "state": "online", 00:12:59.131 "raid_level": "raid1", 00:12:59.131 "superblock": true, 00:12:59.131 "num_base_bdevs": 4, 00:12:59.131 "num_base_bdevs_discovered": 3, 00:12:59.131 "num_base_bdevs_operational": 3, 00:12:59.131 "base_bdevs_list": [ 00:12:59.131 { 00:12:59.131 "name": null, 00:12:59.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.131 "is_configured": false, 00:12:59.131 "data_offset": 0, 00:12:59.131 "data_size": 63488 00:12:59.131 }, 00:12:59.131 { 00:12:59.131 "name": "BaseBdev2", 00:12:59.131 "uuid": "8660bb56-b1eb-5f8b-a8af-1f5e4ed0bb3f", 00:12:59.131 "is_configured": true, 00:12:59.131 "data_offset": 2048, 00:12:59.131 "data_size": 63488 00:12:59.131 }, 00:12:59.131 { 00:12:59.131 "name": "BaseBdev3", 00:12:59.131 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:12:59.131 "is_configured": true, 00:12:59.131 "data_offset": 2048, 00:12:59.131 "data_size": 63488 00:12:59.131 }, 00:12:59.131 { 00:12:59.131 "name": "BaseBdev4", 00:12:59.131 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:12:59.131 "is_configured": true, 00:12:59.131 "data_offset": 2048, 00:12:59.131 "data_size": 63488 00:12:59.132 } 00:12:59.132 ] 00:12:59.132 }' 00:12:59.132 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.132 
12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.702 "name": "raid_bdev1", 00:12:59.702 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:12:59.702 "strip_size_kb": 0, 00:12:59.702 "state": "online", 00:12:59.702 "raid_level": "raid1", 00:12:59.702 "superblock": true, 00:12:59.702 "num_base_bdevs": 4, 00:12:59.702 "num_base_bdevs_discovered": 3, 00:12:59.702 "num_base_bdevs_operational": 3, 00:12:59.702 "base_bdevs_list": [ 00:12:59.702 { 00:12:59.702 "name": null, 00:12:59.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.702 "is_configured": false, 00:12:59.702 "data_offset": 0, 00:12:59.702 "data_size": 63488 00:12:59.702 }, 00:12:59.702 { 00:12:59.702 "name": "BaseBdev2", 00:12:59.702 "uuid": 
"8660bb56-b1eb-5f8b-a8af-1f5e4ed0bb3f", 00:12:59.702 "is_configured": true, 00:12:59.702 "data_offset": 2048, 00:12:59.702 "data_size": 63488 00:12:59.702 }, 00:12:59.702 { 00:12:59.702 "name": "BaseBdev3", 00:12:59.702 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:12:59.702 "is_configured": true, 00:12:59.702 "data_offset": 2048, 00:12:59.702 "data_size": 63488 00:12:59.702 }, 00:12:59.702 { 00:12:59.702 "name": "BaseBdev4", 00:12:59.702 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:12:59.702 "is_configured": true, 00:12:59.702 "data_offset": 2048, 00:12:59.702 "data_size": 63488 00:12:59.702 } 00:12:59.702 ] 00:12:59.702 }' 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 [2024-11-19 12:33:04.832435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.702 [2024-11-19 12:33:04.835756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:59.702 [2024-11-19 12:33:04.837668] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.702 12:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # 
sleep 1 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.642 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.642 "name": "raid_bdev1", 00:13:00.642 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:00.642 "strip_size_kb": 0, 00:13:00.642 "state": "online", 00:13:00.642 "raid_level": "raid1", 00:13:00.642 "superblock": true, 00:13:00.642 "num_base_bdevs": 4, 00:13:00.642 "num_base_bdevs_discovered": 4, 00:13:00.642 "num_base_bdevs_operational": 4, 00:13:00.642 "process": { 00:13:00.642 "type": "rebuild", 00:13:00.642 "target": "spare", 00:13:00.642 "progress": { 00:13:00.642 "blocks": 20480, 00:13:00.642 "percent": 32 00:13:00.642 } 00:13:00.642 }, 00:13:00.642 "base_bdevs_list": [ 00:13:00.642 { 00:13:00.642 "name": "spare", 00:13:00.642 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:00.642 "is_configured": true, 00:13:00.642 "data_offset": 2048, 
00:13:00.642 "data_size": 63488 00:13:00.642 }, 00:13:00.642 { 00:13:00.642 "name": "BaseBdev2", 00:13:00.642 "uuid": "8660bb56-b1eb-5f8b-a8af-1f5e4ed0bb3f", 00:13:00.642 "is_configured": true, 00:13:00.642 "data_offset": 2048, 00:13:00.642 "data_size": 63488 00:13:00.642 }, 00:13:00.642 { 00:13:00.642 "name": "BaseBdev3", 00:13:00.642 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:00.642 "is_configured": true, 00:13:00.642 "data_offset": 2048, 00:13:00.642 "data_size": 63488 00:13:00.642 }, 00:13:00.642 { 00:13:00.642 "name": "BaseBdev4", 00:13:00.642 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:00.642 "is_configured": true, 00:13:00.642 "data_offset": 2048, 00:13:00.642 "data_size": 63488 00:13:00.642 } 00:13:00.642 ] 00:13:00.642 }' 00:13:00.902 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.902 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.902 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.902 12:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:00.902 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:00.902 12:33:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.902 [2024-11-19 12:33:06.008293] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:00.902 [2024-11-19 12:33:06.142013] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.902 12:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.163 "name": "raid_bdev1", 
00:13:01.163 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:01.163 "strip_size_kb": 0, 00:13:01.163 "state": "online", 00:13:01.163 "raid_level": "raid1", 00:13:01.163 "superblock": true, 00:13:01.163 "num_base_bdevs": 4, 00:13:01.163 "num_base_bdevs_discovered": 3, 00:13:01.163 "num_base_bdevs_operational": 3, 00:13:01.163 "process": { 00:13:01.163 "type": "rebuild", 00:13:01.163 "target": "spare", 00:13:01.163 "progress": { 00:13:01.163 "blocks": 24576, 00:13:01.163 "percent": 38 00:13:01.163 } 00:13:01.163 }, 00:13:01.163 "base_bdevs_list": [ 00:13:01.163 { 00:13:01.163 "name": "spare", 00:13:01.163 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:01.163 "is_configured": true, 00:13:01.163 "data_offset": 2048, 00:13:01.163 "data_size": 63488 00:13:01.163 }, 00:13:01.163 { 00:13:01.163 "name": null, 00:13:01.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.163 "is_configured": false, 00:13:01.163 "data_offset": 0, 00:13:01.163 "data_size": 63488 00:13:01.163 }, 00:13:01.163 { 00:13:01.163 "name": "BaseBdev3", 00:13:01.163 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:01.163 "is_configured": true, 00:13:01.163 "data_offset": 2048, 00:13:01.163 "data_size": 63488 00:13:01.163 }, 00:13:01.163 { 00:13:01.163 "name": "BaseBdev4", 00:13:01.163 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:01.163 "is_configured": true, 00:13:01.163 "data_offset": 2048, 00:13:01.163 "data_size": 63488 00:13:01.163 } 00:13:01.163 ] 00:13:01.163 }' 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@706 -- # local timeout=379 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.163 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.163 "name": "raid_bdev1", 00:13:01.163 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:01.163 "strip_size_kb": 0, 00:13:01.163 "state": "online", 00:13:01.163 "raid_level": "raid1", 00:13:01.163 "superblock": true, 00:13:01.163 "num_base_bdevs": 4, 00:13:01.163 "num_base_bdevs_discovered": 3, 00:13:01.163 "num_base_bdevs_operational": 3, 00:13:01.163 "process": { 00:13:01.163 "type": "rebuild", 00:13:01.163 "target": "spare", 00:13:01.163 "progress": { 00:13:01.163 "blocks": 26624, 00:13:01.163 "percent": 41 00:13:01.163 } 00:13:01.163 }, 00:13:01.163 "base_bdevs_list": [ 00:13:01.163 { 00:13:01.163 "name": 
"spare", 00:13:01.163 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:01.163 "is_configured": true, 00:13:01.163 "data_offset": 2048, 00:13:01.163 "data_size": 63488 00:13:01.163 }, 00:13:01.163 { 00:13:01.163 "name": null, 00:13:01.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.163 "is_configured": false, 00:13:01.163 "data_offset": 0, 00:13:01.163 "data_size": 63488 00:13:01.163 }, 00:13:01.163 { 00:13:01.163 "name": "BaseBdev3", 00:13:01.163 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:01.163 "is_configured": true, 00:13:01.163 "data_offset": 2048, 00:13:01.163 "data_size": 63488 00:13:01.163 }, 00:13:01.163 { 00:13:01.163 "name": "BaseBdev4", 00:13:01.163 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:01.163 "is_configured": true, 00:13:01.164 "data_offset": 2048, 00:13:01.164 "data_size": 63488 00:13:01.164 } 00:13:01.164 ] 00:13:01.164 }' 00:13:01.164 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.164 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.164 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.164 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.164 12:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.542 "name": "raid_bdev1", 00:13:02.542 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:02.542 "strip_size_kb": 0, 00:13:02.542 "state": "online", 00:13:02.542 "raid_level": "raid1", 00:13:02.542 "superblock": true, 00:13:02.542 "num_base_bdevs": 4, 00:13:02.542 "num_base_bdevs_discovered": 3, 00:13:02.542 "num_base_bdevs_operational": 3, 00:13:02.542 "process": { 00:13:02.542 "type": "rebuild", 00:13:02.542 "target": "spare", 00:13:02.542 "progress": { 00:13:02.542 "blocks": 49152, 00:13:02.542 "percent": 77 00:13:02.542 } 00:13:02.542 }, 00:13:02.542 "base_bdevs_list": [ 00:13:02.542 { 00:13:02.542 "name": "spare", 00:13:02.542 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:02.542 "is_configured": true, 00:13:02.542 "data_offset": 2048, 00:13:02.542 "data_size": 63488 00:13:02.542 }, 00:13:02.542 { 00:13:02.542 "name": null, 00:13:02.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.542 "is_configured": false, 00:13:02.542 "data_offset": 0, 00:13:02.542 "data_size": 63488 00:13:02.542 }, 00:13:02.542 { 00:13:02.542 "name": "BaseBdev3", 00:13:02.542 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:02.542 "is_configured": true, 00:13:02.542 "data_offset": 2048, 00:13:02.542 
"data_size": 63488 00:13:02.542 }, 00:13:02.542 { 00:13:02.542 "name": "BaseBdev4", 00:13:02.542 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:02.542 "is_configured": true, 00:13:02.542 "data_offset": 2048, 00:13:02.542 "data_size": 63488 00:13:02.542 } 00:13:02.542 ] 00:13:02.542 }' 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.542 12:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:02.802 [2024-11-19 12:33:08.049018] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:02.802 [2024-11-19 12:33:08.049213] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:02.802 [2024-11-19 12:33:08.049356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.371 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.371 "name": "raid_bdev1", 00:13:03.371 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:03.371 "strip_size_kb": 0, 00:13:03.371 "state": "online", 00:13:03.371 "raid_level": "raid1", 00:13:03.371 "superblock": true, 00:13:03.371 "num_base_bdevs": 4, 00:13:03.371 "num_base_bdevs_discovered": 3, 00:13:03.371 "num_base_bdevs_operational": 3, 00:13:03.371 "base_bdevs_list": [ 00:13:03.371 { 00:13:03.371 "name": "spare", 00:13:03.371 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:03.371 "is_configured": true, 00:13:03.371 "data_offset": 2048, 00:13:03.371 "data_size": 63488 00:13:03.371 }, 00:13:03.371 { 00:13:03.371 "name": null, 00:13:03.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.372 "is_configured": false, 00:13:03.372 "data_offset": 0, 00:13:03.372 "data_size": 63488 00:13:03.372 }, 00:13:03.372 { 00:13:03.372 "name": "BaseBdev3", 00:13:03.372 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:03.372 "is_configured": true, 00:13:03.372 "data_offset": 2048, 00:13:03.372 "data_size": 63488 00:13:03.372 }, 00:13:03.372 { 00:13:03.372 "name": "BaseBdev4", 00:13:03.372 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:03.372 "is_configured": true, 00:13:03.372 "data_offset": 2048, 00:13:03.372 "data_size": 63488 00:13:03.372 } 00:13:03.372 ] 00:13:03.372 }' 00:13:03.372 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.632 "name": "raid_bdev1", 00:13:03.632 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:03.632 "strip_size_kb": 0, 00:13:03.632 "state": "online", 00:13:03.632 "raid_level": "raid1", 00:13:03.632 "superblock": true, 00:13:03.632 "num_base_bdevs": 4, 00:13:03.632 "num_base_bdevs_discovered": 3, 00:13:03.632 "num_base_bdevs_operational": 3, 00:13:03.632 
"base_bdevs_list": [ 00:13:03.632 { 00:13:03.632 "name": "spare", 00:13:03.632 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:03.632 "is_configured": true, 00:13:03.632 "data_offset": 2048, 00:13:03.632 "data_size": 63488 00:13:03.632 }, 00:13:03.632 { 00:13:03.632 "name": null, 00:13:03.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.632 "is_configured": false, 00:13:03.632 "data_offset": 0, 00:13:03.632 "data_size": 63488 00:13:03.632 }, 00:13:03.632 { 00:13:03.632 "name": "BaseBdev3", 00:13:03.632 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:03.632 "is_configured": true, 00:13:03.632 "data_offset": 2048, 00:13:03.632 "data_size": 63488 00:13:03.632 }, 00:13:03.632 { 00:13:03.632 "name": "BaseBdev4", 00:13:03.632 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:03.632 "is_configured": true, 00:13:03.632 "data_offset": 2048, 00:13:03.632 "data_size": 63488 00:13:03.632 } 00:13:03.632 ] 00:13:03.632 }' 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.632 12:33:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.632 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.632 "name": "raid_bdev1", 00:13:03.632 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:03.632 "strip_size_kb": 0, 00:13:03.632 "state": "online", 00:13:03.632 "raid_level": "raid1", 00:13:03.632 "superblock": true, 00:13:03.633 "num_base_bdevs": 4, 00:13:03.633 "num_base_bdevs_discovered": 3, 00:13:03.633 "num_base_bdevs_operational": 3, 00:13:03.633 "base_bdevs_list": [ 00:13:03.633 { 00:13:03.633 "name": "spare", 00:13:03.633 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:03.633 "is_configured": true, 00:13:03.633 "data_offset": 2048, 00:13:03.633 "data_size": 63488 00:13:03.633 }, 00:13:03.633 { 00:13:03.633 "name": null, 00:13:03.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.633 "is_configured": false, 00:13:03.633 "data_offset": 0, 00:13:03.633 "data_size": 63488 00:13:03.633 }, 
00:13:03.633 { 00:13:03.633 "name": "BaseBdev3", 00:13:03.633 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:03.633 "is_configured": true, 00:13:03.633 "data_offset": 2048, 00:13:03.633 "data_size": 63488 00:13:03.633 }, 00:13:03.633 { 00:13:03.633 "name": "BaseBdev4", 00:13:03.633 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:03.633 "is_configured": true, 00:13:03.633 "data_offset": 2048, 00:13:03.633 "data_size": 63488 00:13:03.633 } 00:13:03.633 ] 00:13:03.633 }' 00:13:03.633 12:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.633 12:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.202 [2024-11-19 12:33:09.275265] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:04.202 [2024-11-19 12:33:09.275308] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.202 [2024-11-19 12:33:09.275407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.202 [2024-11-19 12:33:09.275504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.202 [2024-11-19 12:33:09.275517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 
00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:04.202 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:04.461 /dev/nbd0 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.461 1+0 records in 00:13:04.461 1+0 records out 00:13:04.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190189 s, 21.5 MB/s 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.461 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:04.461 12:33:09 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:04.722 /dev/nbd1 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.722 1+0 records in 00:13:04.722 1+0 records out 00:13:04.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220139 s, 18.6 MB/s 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.722 12:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.982 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.242 12:33:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.242 [2024-11-19 12:33:10.451848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:05.242 [2024-11-19 12:33:10.451915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.242 [2024-11-19 12:33:10.451944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:05.242 [2024-11-19 12:33:10.451958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.242 [2024-11-19 12:33:10.454125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.242 [2024-11-19 12:33:10.454168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:05.242 [2024-11-19 12:33:10.454252] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:05.242 [2024-11-19 12:33:10.454297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.242 [2024-11-19 12:33:10.454428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.242 [2024-11-19 12:33:10.454527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:05.242 spare 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.242 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.503 [2024-11-19 12:33:10.554418] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:05.503 [2024-11-19 12:33:10.554526] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:13:05.503 [2024-11-19 12:33:10.554858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:05.503 [2024-11-19 12:33:10.555015] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:05.503 [2024-11-19 12:33:10.555026] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:05.503 [2024-11-19 12:33:10.555157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.503 12:33:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.503 "name": "raid_bdev1", 00:13:05.503 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:05.503 "strip_size_kb": 0, 00:13:05.503 "state": "online", 00:13:05.503 "raid_level": "raid1", 00:13:05.503 "superblock": true, 00:13:05.503 "num_base_bdevs": 4, 00:13:05.503 "num_base_bdevs_discovered": 3, 00:13:05.503 "num_base_bdevs_operational": 3, 00:13:05.503 "base_bdevs_list": [ 00:13:05.503 { 00:13:05.503 "name": "spare", 00:13:05.503 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:05.503 "is_configured": true, 00:13:05.503 "data_offset": 2048, 00:13:05.503 "data_size": 63488 00:13:05.503 }, 00:13:05.503 { 00:13:05.503 "name": null, 00:13:05.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.503 "is_configured": false, 00:13:05.503 "data_offset": 2048, 00:13:05.503 "data_size": 63488 00:13:05.503 }, 00:13:05.503 { 00:13:05.503 "name": "BaseBdev3", 00:13:05.503 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:05.503 "is_configured": true, 00:13:05.503 "data_offset": 2048, 00:13:05.503 "data_size": 63488 00:13:05.503 }, 00:13:05.503 { 00:13:05.503 "name": "BaseBdev4", 00:13:05.503 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:05.503 "is_configured": true, 00:13:05.503 "data_offset": 2048, 00:13:05.503 "data_size": 63488 00:13:05.503 } 00:13:05.503 ] 00:13:05.503 }' 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.503 12:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.763 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:13:05.763 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.763 12:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.763 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.763 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.763 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.763 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.763 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.763 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.024 "name": "raid_bdev1", 00:13:06.024 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:06.024 "strip_size_kb": 0, 00:13:06.024 "state": "online", 00:13:06.024 "raid_level": "raid1", 00:13:06.024 "superblock": true, 00:13:06.024 "num_base_bdevs": 4, 00:13:06.024 "num_base_bdevs_discovered": 3, 00:13:06.024 "num_base_bdevs_operational": 3, 00:13:06.024 "base_bdevs_list": [ 00:13:06.024 { 00:13:06.024 "name": "spare", 00:13:06.024 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:06.024 "is_configured": true, 00:13:06.024 "data_offset": 2048, 00:13:06.024 "data_size": 63488 00:13:06.024 }, 00:13:06.024 { 00:13:06.024 "name": null, 00:13:06.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.024 "is_configured": false, 00:13:06.024 "data_offset": 2048, 00:13:06.024 "data_size": 63488 00:13:06.024 }, 00:13:06.024 { 00:13:06.024 "name": 
"BaseBdev3", 00:13:06.024 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:06.024 "is_configured": true, 00:13:06.024 "data_offset": 2048, 00:13:06.024 "data_size": 63488 00:13:06.024 }, 00:13:06.024 { 00:13:06.024 "name": "BaseBdev4", 00:13:06.024 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:06.024 "is_configured": true, 00:13:06.024 "data_offset": 2048, 00:13:06.024 "data_size": 63488 00:13:06.024 } 00:13:06.024 ] 00:13:06.024 }' 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.024 [2024-11-19 12:33:11.190742] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.024 12:33:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.024 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.025 "name": "raid_bdev1", 00:13:06.025 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:06.025 "strip_size_kb": 0, 00:13:06.025 "state": "online", 
00:13:06.025 "raid_level": "raid1", 00:13:06.025 "superblock": true, 00:13:06.025 "num_base_bdevs": 4, 00:13:06.025 "num_base_bdevs_discovered": 2, 00:13:06.025 "num_base_bdevs_operational": 2, 00:13:06.025 "base_bdevs_list": [ 00:13:06.025 { 00:13:06.025 "name": null, 00:13:06.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.025 "is_configured": false, 00:13:06.025 "data_offset": 0, 00:13:06.025 "data_size": 63488 00:13:06.025 }, 00:13:06.025 { 00:13:06.025 "name": null, 00:13:06.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.025 "is_configured": false, 00:13:06.025 "data_offset": 2048, 00:13:06.025 "data_size": 63488 00:13:06.025 }, 00:13:06.025 { 00:13:06.025 "name": "BaseBdev3", 00:13:06.025 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:06.025 "is_configured": true, 00:13:06.025 "data_offset": 2048, 00:13:06.025 "data_size": 63488 00:13:06.025 }, 00:13:06.025 { 00:13:06.025 "name": "BaseBdev4", 00:13:06.025 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:06.025 "is_configured": true, 00:13:06.025 "data_offset": 2048, 00:13:06.025 "data_size": 63488 00:13:06.025 } 00:13:06.025 ] 00:13:06.025 }' 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.025 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.595 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:06.595 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.595 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.595 [2024-11-19 12:33:11.626009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.595 [2024-11-19 12:33:11.626285] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:13:06.595 [2024-11-19 12:33:11.626353] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:06.595 [2024-11-19 12:33:11.626471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.595 [2024-11-19 12:33:11.629838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:06.595 [2024-11-19 12:33:11.631912] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.595 12:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.595 12:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.536 "name": "raid_bdev1", 00:13:07.536 "uuid": 
"40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:07.536 "strip_size_kb": 0, 00:13:07.536 "state": "online", 00:13:07.536 "raid_level": "raid1", 00:13:07.536 "superblock": true, 00:13:07.536 "num_base_bdevs": 4, 00:13:07.536 "num_base_bdevs_discovered": 3, 00:13:07.536 "num_base_bdevs_operational": 3, 00:13:07.536 "process": { 00:13:07.536 "type": "rebuild", 00:13:07.536 "target": "spare", 00:13:07.536 "progress": { 00:13:07.536 "blocks": 20480, 00:13:07.536 "percent": 32 00:13:07.536 } 00:13:07.536 }, 00:13:07.536 "base_bdevs_list": [ 00:13:07.536 { 00:13:07.536 "name": "spare", 00:13:07.536 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:07.536 "is_configured": true, 00:13:07.536 "data_offset": 2048, 00:13:07.536 "data_size": 63488 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "name": null, 00:13:07.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.536 "is_configured": false, 00:13:07.536 "data_offset": 2048, 00:13:07.536 "data_size": 63488 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "name": "BaseBdev3", 00:13:07.536 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:07.536 "is_configured": true, 00:13:07.536 "data_offset": 2048, 00:13:07.536 "data_size": 63488 00:13:07.536 }, 00:13:07.536 { 00:13:07.536 "name": "BaseBdev4", 00:13:07.536 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:07.536 "is_configured": true, 00:13:07.536 "data_offset": 2048, 00:13:07.536 "data_size": 63488 00:13:07.536 } 00:13:07.536 ] 00:13:07.536 }' 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.536 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.797 [2024-11-19 12:33:12.794939] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.797 [2024-11-19 12:33:12.836890] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:07.797 [2024-11-19 12:33:12.837026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.797 [2024-11-19 12:33:12.837064] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.797 [2024-11-19 12:33:12.837089] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.797 "name": "raid_bdev1", 00:13:07.797 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:07.797 "strip_size_kb": 0, 00:13:07.797 "state": "online", 00:13:07.797 "raid_level": "raid1", 00:13:07.797 "superblock": true, 00:13:07.797 "num_base_bdevs": 4, 00:13:07.797 "num_base_bdevs_discovered": 2, 00:13:07.797 "num_base_bdevs_operational": 2, 00:13:07.797 "base_bdevs_list": [ 00:13:07.797 { 00:13:07.797 "name": null, 00:13:07.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.797 "is_configured": false, 00:13:07.797 "data_offset": 0, 00:13:07.797 "data_size": 63488 00:13:07.797 }, 00:13:07.797 { 00:13:07.797 "name": null, 00:13:07.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.797 "is_configured": false, 00:13:07.797 "data_offset": 2048, 00:13:07.797 "data_size": 63488 00:13:07.797 }, 00:13:07.797 { 00:13:07.797 "name": "BaseBdev3", 00:13:07.797 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:07.797 "is_configured": true, 00:13:07.797 "data_offset": 2048, 00:13:07.797 "data_size": 63488 00:13:07.797 }, 00:13:07.797 { 00:13:07.797 "name": "BaseBdev4", 00:13:07.797 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:07.797 "is_configured": true, 00:13:07.797 
"data_offset": 2048, 00:13:07.797 "data_size": 63488 00:13:07.797 } 00:13:07.797 ] 00:13:07.797 }' 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.797 12:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.057 12:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:08.057 12:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.057 12:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.317 [2024-11-19 12:33:13.316250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:08.317 [2024-11-19 12:33:13.316401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.317 [2024-11-19 12:33:13.316436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:08.317 [2024-11-19 12:33:13.316448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.317 [2024-11-19 12:33:13.316941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.317 [2024-11-19 12:33:13.316965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:08.317 [2024-11-19 12:33:13.317054] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:08.317 [2024-11-19 12:33:13.317075] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:08.317 [2024-11-19 12:33:13.317085] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:08.317 [2024-11-19 12:33:13.317111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.317 [2024-11-19 12:33:13.320485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:08.317 spare 00:13:08.317 12:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.317 12:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:08.317 [2024-11-19 12:33:13.322451] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.258 "name": "raid_bdev1", 00:13:09.258 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:09.258 "strip_size_kb": 0, 00:13:09.258 "state": "online", 00:13:09.258 
"raid_level": "raid1", 00:13:09.258 "superblock": true, 00:13:09.258 "num_base_bdevs": 4, 00:13:09.258 "num_base_bdevs_discovered": 3, 00:13:09.258 "num_base_bdevs_operational": 3, 00:13:09.258 "process": { 00:13:09.258 "type": "rebuild", 00:13:09.258 "target": "spare", 00:13:09.258 "progress": { 00:13:09.258 "blocks": 20480, 00:13:09.258 "percent": 32 00:13:09.258 } 00:13:09.258 }, 00:13:09.258 "base_bdevs_list": [ 00:13:09.258 { 00:13:09.258 "name": "spare", 00:13:09.258 "uuid": "1caa4a07-0b15-5b67-b75a-7d97a10b79e0", 00:13:09.258 "is_configured": true, 00:13:09.258 "data_offset": 2048, 00:13:09.258 "data_size": 63488 00:13:09.258 }, 00:13:09.258 { 00:13:09.258 "name": null, 00:13:09.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.258 "is_configured": false, 00:13:09.258 "data_offset": 2048, 00:13:09.258 "data_size": 63488 00:13:09.258 }, 00:13:09.258 { 00:13:09.258 "name": "BaseBdev3", 00:13:09.258 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:09.258 "is_configured": true, 00:13:09.258 "data_offset": 2048, 00:13:09.258 "data_size": 63488 00:13:09.258 }, 00:13:09.258 { 00:13:09.258 "name": "BaseBdev4", 00:13:09.258 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:09.258 "is_configured": true, 00:13:09.258 "data_offset": 2048, 00:13:09.258 "data_size": 63488 00:13:09.258 } 00:13:09.258 ] 00:13:09.258 }' 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.258 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.259 [2024-11-19 12:33:14.487329] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.518 [2024-11-19 12:33:14.527213] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:09.518 [2024-11-19 12:33:14.527271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.518 [2024-11-19 12:33:14.527289] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.518 [2024-11-19 12:33:14.527296] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.519 
12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.519 "name": "raid_bdev1", 00:13:09.519 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:09.519 "strip_size_kb": 0, 00:13:09.519 "state": "online", 00:13:09.519 "raid_level": "raid1", 00:13:09.519 "superblock": true, 00:13:09.519 "num_base_bdevs": 4, 00:13:09.519 "num_base_bdevs_discovered": 2, 00:13:09.519 "num_base_bdevs_operational": 2, 00:13:09.519 "base_bdevs_list": [ 00:13:09.519 { 00:13:09.519 "name": null, 00:13:09.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.519 "is_configured": false, 00:13:09.519 "data_offset": 0, 00:13:09.519 "data_size": 63488 00:13:09.519 }, 00:13:09.519 { 00:13:09.519 "name": null, 00:13:09.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.519 "is_configured": false, 00:13:09.519 "data_offset": 2048, 00:13:09.519 "data_size": 63488 00:13:09.519 }, 00:13:09.519 { 00:13:09.519 "name": "BaseBdev3", 00:13:09.519 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:09.519 "is_configured": true, 00:13:09.519 "data_offset": 2048, 00:13:09.519 "data_size": 63488 00:13:09.519 }, 00:13:09.519 { 00:13:09.519 "name": "BaseBdev4", 00:13:09.519 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:09.519 "is_configured": true, 00:13:09.519 "data_offset": 2048, 00:13:09.519 "data_size": 63488 00:13:09.519 } 00:13:09.519 ] 00:13:09.519 }' 00:13:09.519 12:33:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.519 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.779 "name": "raid_bdev1", 00:13:09.779 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:09.779 "strip_size_kb": 0, 00:13:09.779 "state": "online", 00:13:09.779 "raid_level": "raid1", 00:13:09.779 "superblock": true, 00:13:09.779 "num_base_bdevs": 4, 00:13:09.779 "num_base_bdevs_discovered": 2, 00:13:09.779 "num_base_bdevs_operational": 2, 00:13:09.779 "base_bdevs_list": [ 00:13:09.779 { 00:13:09.779 "name": null, 00:13:09.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.779 "is_configured": false, 00:13:09.779 "data_offset": 0, 00:13:09.779 "data_size": 63488 00:13:09.779 }, 00:13:09.779 
{ 00:13:09.779 "name": null, 00:13:09.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.779 "is_configured": false, 00:13:09.779 "data_offset": 2048, 00:13:09.779 "data_size": 63488 00:13:09.779 }, 00:13:09.779 { 00:13:09.779 "name": "BaseBdev3", 00:13:09.779 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:09.779 "is_configured": true, 00:13:09.779 "data_offset": 2048, 00:13:09.779 "data_size": 63488 00:13:09.779 }, 00:13:09.779 { 00:13:09.779 "name": "BaseBdev4", 00:13:09.779 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:09.779 "is_configured": true, 00:13:09.779 "data_offset": 2048, 00:13:09.779 "data_size": 63488 00:13:09.779 } 00:13:09.779 ] 00:13:09.779 }' 00:13:09.779 12:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.780 12:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.780 12:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.040 [2024-11-19 12:33:15.098536] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:10.040 [2024-11-19 12:33:15.098595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.040 [2024-11-19 12:33:15.098620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:10.040 [2024-11-19 12:33:15.098628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.040 [2024-11-19 12:33:15.099101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.040 [2024-11-19 12:33:15.099126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:10.040 [2024-11-19 12:33:15.099202] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:10.040 [2024-11-19 12:33:15.099217] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:10.040 [2024-11-19 12:33:15.099226] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:10.040 [2024-11-19 12:33:15.099235] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:10.040 BaseBdev1 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.040 12:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.981 12:33:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.981 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.981 "name": "raid_bdev1", 00:13:10.981 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:10.981 "strip_size_kb": 0, 00:13:10.981 "state": "online", 00:13:10.981 "raid_level": "raid1", 00:13:10.981 "superblock": true, 00:13:10.981 "num_base_bdevs": 4, 00:13:10.981 "num_base_bdevs_discovered": 2, 00:13:10.981 "num_base_bdevs_operational": 2, 00:13:10.981 "base_bdevs_list": [ 00:13:10.981 { 00:13:10.981 "name": null, 00:13:10.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.981 "is_configured": false, 00:13:10.981 "data_offset": 0, 00:13:10.981 "data_size": 63488 00:13:10.981 }, 00:13:10.981 { 00:13:10.981 "name": null, 00:13:10.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.982 
"is_configured": false, 00:13:10.982 "data_offset": 2048, 00:13:10.982 "data_size": 63488 00:13:10.982 }, 00:13:10.982 { 00:13:10.982 "name": "BaseBdev3", 00:13:10.982 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:10.982 "is_configured": true, 00:13:10.982 "data_offset": 2048, 00:13:10.982 "data_size": 63488 00:13:10.982 }, 00:13:10.982 { 00:13:10.982 "name": "BaseBdev4", 00:13:10.982 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:10.982 "is_configured": true, 00:13:10.982 "data_offset": 2048, 00:13:10.982 "data_size": 63488 00:13:10.982 } 00:13:10.982 ] 00:13:10.982 }' 00:13:10.982 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.982 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:11.552 "name": "raid_bdev1", 00:13:11.552 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:11.552 "strip_size_kb": 0, 00:13:11.552 "state": "online", 00:13:11.552 "raid_level": "raid1", 00:13:11.552 "superblock": true, 00:13:11.552 "num_base_bdevs": 4, 00:13:11.552 "num_base_bdevs_discovered": 2, 00:13:11.552 "num_base_bdevs_operational": 2, 00:13:11.552 "base_bdevs_list": [ 00:13:11.552 { 00:13:11.552 "name": null, 00:13:11.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.552 "is_configured": false, 00:13:11.552 "data_offset": 0, 00:13:11.552 "data_size": 63488 00:13:11.552 }, 00:13:11.552 { 00:13:11.552 "name": null, 00:13:11.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.552 "is_configured": false, 00:13:11.552 "data_offset": 2048, 00:13:11.552 "data_size": 63488 00:13:11.552 }, 00:13:11.552 { 00:13:11.552 "name": "BaseBdev3", 00:13:11.552 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:11.552 "is_configured": true, 00:13:11.552 "data_offset": 2048, 00:13:11.552 "data_size": 63488 00:13:11.552 }, 00:13:11.552 { 00:13:11.552 "name": "BaseBdev4", 00:13:11.552 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:11.552 "is_configured": true, 00:13:11.552 "data_offset": 2048, 00:13:11.552 "data_size": 63488 00:13:11.552 } 00:13:11.552 ] 00:13:11.552 }' 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.552 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.552 [2024-11-19 12:33:16.707891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.552 [2024-11-19 12:33:16.708118] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:11.552 [2024-11-19 12:33:16.708138] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:11.552 request: 00:13:11.552 { 00:13:11.552 "base_bdev": "BaseBdev1", 00:13:11.552 "raid_bdev": "raid_bdev1", 00:13:11.552 "method": "bdev_raid_add_base_bdev", 00:13:11.552 "req_id": 1 00:13:11.552 } 00:13:11.552 Got JSON-RPC error response 00:13:11.552 response: 00:13:11.552 { 00:13:11.552 "code": -22, 00:13:11.552 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:11.552 } 00:13:11.553 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:11.553 12:33:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:13:11.553 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:11.553 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:11.553 12:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:11.553 12:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.492 12:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:12.752 12:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.752 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.752 "name": "raid_bdev1", 00:13:12.752 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:12.752 "strip_size_kb": 0, 00:13:12.752 "state": "online", 00:13:12.752 "raid_level": "raid1", 00:13:12.752 "superblock": true, 00:13:12.752 "num_base_bdevs": 4, 00:13:12.752 "num_base_bdevs_discovered": 2, 00:13:12.752 "num_base_bdevs_operational": 2, 00:13:12.752 "base_bdevs_list": [ 00:13:12.752 { 00:13:12.752 "name": null, 00:13:12.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.752 "is_configured": false, 00:13:12.752 "data_offset": 0, 00:13:12.752 "data_size": 63488 00:13:12.752 }, 00:13:12.752 { 00:13:12.752 "name": null, 00:13:12.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.752 "is_configured": false, 00:13:12.752 "data_offset": 2048, 00:13:12.752 "data_size": 63488 00:13:12.752 }, 00:13:12.752 { 00:13:12.752 "name": "BaseBdev3", 00:13:12.752 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:12.752 "is_configured": true, 00:13:12.752 "data_offset": 2048, 00:13:12.752 "data_size": 63488 00:13:12.752 }, 00:13:12.752 { 00:13:12.752 "name": "BaseBdev4", 00:13:12.752 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:12.752 "is_configured": true, 00:13:12.752 "data_offset": 2048, 00:13:12.752 "data_size": 63488 00:13:12.752 } 00:13:12.752 ] 00:13:12.752 }' 00:13:12.752 12:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.752 12:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.012 12:33:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.012 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.012 "name": "raid_bdev1", 00:13:13.012 "uuid": "40cd8f3a-fe1e-4d04-8433-44b5f581e9fa", 00:13:13.012 "strip_size_kb": 0, 00:13:13.012 "state": "online", 00:13:13.012 "raid_level": "raid1", 00:13:13.012 "superblock": true, 00:13:13.012 "num_base_bdevs": 4, 00:13:13.012 "num_base_bdevs_discovered": 2, 00:13:13.012 "num_base_bdevs_operational": 2, 00:13:13.012 "base_bdevs_list": [ 00:13:13.012 { 00:13:13.012 "name": null, 00:13:13.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.013 "is_configured": false, 00:13:13.013 "data_offset": 0, 00:13:13.013 "data_size": 63488 00:13:13.013 }, 00:13:13.013 { 00:13:13.013 "name": null, 00:13:13.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.013 "is_configured": false, 00:13:13.013 "data_offset": 2048, 00:13:13.013 "data_size": 63488 00:13:13.013 }, 00:13:13.013 { 00:13:13.013 "name": "BaseBdev3", 00:13:13.013 "uuid": "33328dd6-2582-5cff-afa9-9d6f6b68e0a5", 00:13:13.013 "is_configured": true, 00:13:13.013 "data_offset": 2048, 00:13:13.013 "data_size": 63488 00:13:13.013 }, 
00:13:13.013 { 00:13:13.013 "name": "BaseBdev4", 00:13:13.013 "uuid": "ea092fbb-b815-5de5-810d-c34b87f29e0d", 00:13:13.013 "is_configured": true, 00:13:13.013 "data_offset": 2048, 00:13:13.013 "data_size": 63488 00:13:13.013 } 00:13:13.013 ] 00:13:13.013 }' 00:13:13.013 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.013 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.013 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88820 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88820 ']' 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88820 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88820 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:13.273 killing process with pid 88820 00:13:13.273 Received shutdown signal, test time was about 60.000000 seconds 00:13:13.273 00:13:13.273 Latency(us) 00:13:13.273 [2024-11-19T12:33:18.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.273 [2024-11-19T12:33:18.534Z] =================================================================================================================== 00:13:13.273 [2024-11-19T12:33:18.534Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88820' 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88820 00:13:13.273 [2024-11-19 12:33:18.333236] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.273 [2024-11-19 12:33:18.333371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.273 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88820 00:13:13.273 [2024-11-19 12:33:18.333438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.273 [2024-11-19 12:33:18.333450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:13.273 [2024-11-19 12:33:18.385015] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:13.532 ************************************ 00:13:13.532 END TEST raid_rebuild_test_sb 00:13:13.532 ************************************ 00:13:13.532 00:13:13.532 real 0m23.507s 00:13:13.532 user 0m28.156s 00:13:13.532 sys 0m3.888s 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.532 12:33:18 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:13.532 12:33:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:13.532 12:33:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.532 12:33:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:13.532 ************************************ 00:13:13.532 START TEST raid_rebuild_test_io 00:13:13.532 ************************************ 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:13.532 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89560 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89560 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89560 ']' 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 
50 -o 3M -q 2 -U -z -L bdev_raid 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.533 12:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.533 [2024-11-19 12:33:18.785623] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:13.533 [2024-11-19 12:33:18.785845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:13.533 Zero copy mechanism will not be used. 00:13:13.533 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89560 ] 00:13:13.793 [2024-11-19 12:33:18.931367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.793 [2024-11-19 12:33:18.976739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.794 [2024-11-19 12:33:19.018806] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.794 [2024-11-19 12:33:19.018919] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.370 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.370 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:14.370 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.370 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:14.370 12:33:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.370 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.370 BaseBdev1_malloc 00:13:14.370 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.370 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:14.370 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.370 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 [2024-11-19 12:33:19.633270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:14.631 [2024-11-19 12:33:19.633389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.631 [2024-11-19 12:33:19.633440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:14.631 [2024-11-19 12:33:19.633477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.631 [2024-11-19 12:33:19.635733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.631 [2024-11-19 12:33:19.635829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:14.631 BaseBdev1 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 
BaseBdev2_malloc 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 [2024-11-19 12:33:19.673371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:14.631 [2024-11-19 12:33:19.673437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.631 [2024-11-19 12:33:19.673464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:14.631 [2024-11-19 12:33:19.673476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.631 [2024-11-19 12:33:19.676209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.631 [2024-11-19 12:33:19.676301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:14.631 BaseBdev2 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 BaseBdev3_malloc 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 [2024-11-19 12:33:19.706445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:14.631 [2024-11-19 12:33:19.706537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.631 [2024-11-19 12:33:19.706582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:14.631 [2024-11-19 12:33:19.706612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.631 [2024-11-19 12:33:19.708905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.631 [2024-11-19 12:33:19.708972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:14.631 BaseBdev3 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 BaseBdev4_malloc 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 [2024-11-19 12:33:19.735091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:14.631 [2024-11-19 12:33:19.735148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.631 [2024-11-19 12:33:19.735173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:14.631 [2024-11-19 12:33:19.735183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.631 [2024-11-19 12:33:19.737205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.631 [2024-11-19 12:33:19.737239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:14.631 BaseBdev4 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 spare_malloc 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 spare_delay 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 [2024-11-19 12:33:19.775535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:14.631 [2024-11-19 12:33:19.775587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.631 [2024-11-19 12:33:19.775608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:14.631 [2024-11-19 12:33:19.775617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.631 [2024-11-19 12:33:19.777642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.631 [2024-11-19 12:33:19.777677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:14.631 spare 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.631 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.631 [2024-11-19 12:33:19.787588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.631 [2024-11-19 12:33:19.789322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.631 [2024-11-19 12:33:19.789388] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.631 [2024-11-19 12:33:19.789428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:14.631 [2024-11-19 12:33:19.789501] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:14.631 [2024-11-19 12:33:19.789511] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:14.631 [2024-11-19 12:33:19.789735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:14.631 [2024-11-19 12:33:19.789887] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:14.632 [2024-11-19 12:33:19.789913] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:14.632 [2024-11-19 12:33:19.790026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.632 "name": "raid_bdev1", 00:13:14.632 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:14.632 "strip_size_kb": 0, 00:13:14.632 "state": "online", 00:13:14.632 "raid_level": "raid1", 00:13:14.632 "superblock": false, 00:13:14.632 "num_base_bdevs": 4, 00:13:14.632 "num_base_bdevs_discovered": 4, 00:13:14.632 "num_base_bdevs_operational": 4, 00:13:14.632 "base_bdevs_list": [ 00:13:14.632 { 00:13:14.632 "name": "BaseBdev1", 00:13:14.632 "uuid": "1914570c-a223-5e39-9d7e-e6765e3b8d14", 00:13:14.632 "is_configured": true, 00:13:14.632 "data_offset": 0, 00:13:14.632 "data_size": 65536 00:13:14.632 }, 00:13:14.632 { 00:13:14.632 "name": "BaseBdev2", 00:13:14.632 "uuid": "36e408fe-5d26-55fa-8694-afaeb6a721c7", 00:13:14.632 "is_configured": true, 00:13:14.632 "data_offset": 0, 00:13:14.632 "data_size": 65536 00:13:14.632 }, 00:13:14.632 { 00:13:14.632 "name": "BaseBdev3", 00:13:14.632 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:14.632 "is_configured": true, 00:13:14.632 "data_offset": 0, 00:13:14.632 "data_size": 65536 00:13:14.632 }, 00:13:14.632 { 00:13:14.632 "name": "BaseBdev4", 00:13:14.632 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:14.632 "is_configured": true, 00:13:14.632 "data_offset": 0, 00:13:14.632 "data_size": 65536 00:13:14.632 } 00:13:14.632 ] 00:13:14.632 }' 00:13:14.632 
12:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.632 12:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:15.203 [2024-11-19 12:33:20.255143] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:15.203 12:33:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.203 [2024-11-19 12:33:20.354612] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.203 "name": "raid_bdev1", 00:13:15.203 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:15.203 "strip_size_kb": 0, 00:13:15.203 "state": "online", 00:13:15.203 "raid_level": "raid1", 00:13:15.203 "superblock": false, 00:13:15.203 "num_base_bdevs": 4, 00:13:15.203 "num_base_bdevs_discovered": 3, 00:13:15.203 "num_base_bdevs_operational": 3, 00:13:15.203 "base_bdevs_list": [ 00:13:15.203 { 00:13:15.203 "name": null, 00:13:15.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.203 "is_configured": false, 00:13:15.203 "data_offset": 0, 00:13:15.203 "data_size": 65536 00:13:15.203 }, 00:13:15.203 { 00:13:15.203 "name": "BaseBdev2", 00:13:15.203 "uuid": "36e408fe-5d26-55fa-8694-afaeb6a721c7", 00:13:15.203 "is_configured": true, 00:13:15.203 "data_offset": 0, 00:13:15.203 "data_size": 65536 00:13:15.203 }, 00:13:15.203 { 00:13:15.203 "name": "BaseBdev3", 00:13:15.203 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:15.203 "is_configured": true, 00:13:15.203 "data_offset": 0, 00:13:15.203 "data_size": 65536 00:13:15.203 }, 00:13:15.203 { 00:13:15.203 "name": "BaseBdev4", 00:13:15.203 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:15.203 "is_configured": true, 00:13:15.203 "data_offset": 0, 00:13:15.203 "data_size": 65536 00:13:15.203 } 00:13:15.203 ] 00:13:15.203 }' 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.203 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.203 [2024-11-19 12:33:20.428473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:15.203 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:15.203 Zero copy mechanism will not be used. 00:13:15.203 Running I/O for 60 seconds... 
00:13:15.773 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:15.773 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.773 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.773 [2024-11-19 12:33:20.796572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:15.773 12:33:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.773 12:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:15.773 [2024-11-19 12:33:20.844312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:15.773 [2024-11-19 12:33:20.846356] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:15.773 [2024-11-19 12:33:20.954860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:15.773 [2024-11-19 12:33:20.956061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:16.033 [2024-11-19 12:33:21.176576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:16.294 [2024-11-19 12:33:21.421933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:16.554 191.00 IOPS, 573.00 MiB/s [2024-11-19T12:33:21.815Z] [2024-11-19 12:33:21.784807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.814 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.814 "name": "raid_bdev1", 00:13:16.814 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:16.814 "strip_size_kb": 0, 00:13:16.814 "state": "online", 00:13:16.814 "raid_level": "raid1", 00:13:16.814 "superblock": false, 00:13:16.814 "num_base_bdevs": 4, 00:13:16.814 "num_base_bdevs_discovered": 4, 00:13:16.814 "num_base_bdevs_operational": 4, 00:13:16.814 "process": { 00:13:16.814 "type": "rebuild", 00:13:16.814 "target": "spare", 00:13:16.814 "progress": { 00:13:16.814 "blocks": 14336, 00:13:16.814 "percent": 21 00:13:16.814 } 00:13:16.814 }, 00:13:16.814 "base_bdevs_list": [ 00:13:16.814 { 00:13:16.814 "name": "spare", 00:13:16.814 "uuid": "6bd2fb88-9018-5546-b2bf-2956e38f90f6", 00:13:16.814 "is_configured": true, 00:13:16.814 "data_offset": 0, 00:13:16.814 "data_size": 65536 00:13:16.814 }, 00:13:16.814 { 00:13:16.814 "name": "BaseBdev2", 00:13:16.814 "uuid": "36e408fe-5d26-55fa-8694-afaeb6a721c7", 00:13:16.815 "is_configured": true, 00:13:16.815 "data_offset": 0, 00:13:16.815 
"data_size": 65536 00:13:16.815 }, 00:13:16.815 { 00:13:16.815 "name": "BaseBdev3", 00:13:16.815 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:16.815 "is_configured": true, 00:13:16.815 "data_offset": 0, 00:13:16.815 "data_size": 65536 00:13:16.815 }, 00:13:16.815 { 00:13:16.815 "name": "BaseBdev4", 00:13:16.815 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:16.815 "is_configured": true, 00:13:16.815 "data_offset": 0, 00:13:16.815 "data_size": 65536 00:13:16.815 } 00:13:16.815 ] 00:13:16.815 }' 00:13:16.815 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.815 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.815 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.815 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.815 12:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:16.815 12:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.815 12:33:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.815 [2024-11-19 12:33:21.986975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:16.815 [2024-11-19 12:33:21.987190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:16.815 [2024-11-19 12:33:22.004172] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.076 [2024-11-19 12:33:22.233947] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:17.076 [2024-11-19 12:33:22.242124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:17.076 [2024-11-19 12:33:22.242212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.076 [2024-11-19 12:33:22.242243] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:17.076 [2024-11-19 12:33:22.259345] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.076 "name": "raid_bdev1", 00:13:17.076 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:17.076 "strip_size_kb": 0, 00:13:17.076 "state": "online", 00:13:17.076 "raid_level": "raid1", 00:13:17.076 "superblock": false, 00:13:17.076 "num_base_bdevs": 4, 00:13:17.076 "num_base_bdevs_discovered": 3, 00:13:17.076 "num_base_bdevs_operational": 3, 00:13:17.076 "base_bdevs_list": [ 00:13:17.076 { 00:13:17.076 "name": null, 00:13:17.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.076 "is_configured": false, 00:13:17.076 "data_offset": 0, 00:13:17.076 "data_size": 65536 00:13:17.076 }, 00:13:17.076 { 00:13:17.076 "name": "BaseBdev2", 00:13:17.076 "uuid": "36e408fe-5d26-55fa-8694-afaeb6a721c7", 00:13:17.076 "is_configured": true, 00:13:17.076 "data_offset": 0, 00:13:17.076 "data_size": 65536 00:13:17.076 }, 00:13:17.076 { 00:13:17.076 "name": "BaseBdev3", 00:13:17.076 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:17.076 "is_configured": true, 00:13:17.076 "data_offset": 0, 00:13:17.076 "data_size": 65536 00:13:17.076 }, 00:13:17.076 { 00:13:17.076 "name": "BaseBdev4", 00:13:17.076 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:17.076 "is_configured": true, 00:13:17.076 "data_offset": 0, 00:13:17.076 "data_size": 65536 00:13:17.076 } 00:13:17.076 ] 00:13:17.076 }' 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.076 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.596 162.00 IOPS, 486.00 MiB/s [2024-11-19T12:33:22.857Z] 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.596 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.596 "name": "raid_bdev1", 00:13:17.596 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:17.596 "strip_size_kb": 0, 00:13:17.596 "state": "online", 00:13:17.596 "raid_level": "raid1", 00:13:17.596 "superblock": false, 00:13:17.596 "num_base_bdevs": 4, 00:13:17.596 "num_base_bdevs_discovered": 3, 00:13:17.596 "num_base_bdevs_operational": 3, 00:13:17.596 "base_bdevs_list": [ 00:13:17.596 { 00:13:17.596 "name": null, 00:13:17.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.596 "is_configured": false, 00:13:17.596 "data_offset": 0, 00:13:17.596 "data_size": 65536 00:13:17.596 }, 00:13:17.596 { 00:13:17.596 "name": "BaseBdev2", 00:13:17.596 "uuid": "36e408fe-5d26-55fa-8694-afaeb6a721c7", 00:13:17.596 "is_configured": true, 00:13:17.596 "data_offset": 0, 00:13:17.596 "data_size": 65536 00:13:17.596 }, 00:13:17.596 { 00:13:17.596 "name": "BaseBdev3", 00:13:17.596 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:17.596 "is_configured": 
true, 00:13:17.596 "data_offset": 0, 00:13:17.596 "data_size": 65536 00:13:17.596 }, 00:13:17.596 { 00:13:17.596 "name": "BaseBdev4", 00:13:17.596 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:17.596 "is_configured": true, 00:13:17.596 "data_offset": 0, 00:13:17.596 "data_size": 65536 00:13:17.597 } 00:13:17.597 ] 00:13:17.597 }' 00:13:17.597 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.597 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.597 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.597 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.597 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:17.597 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.597 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.597 [2024-11-19 12:33:22.795682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.597 12:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.597 12:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:17.597 [2024-11-19 12:33:22.837894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:17.597 [2024-11-19 12:33:22.840029] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.856 [2024-11-19 12:33:22.954560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.856 [2024-11-19 12:33:22.955899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:13:18.116 [2024-11-19 12:33:23.158554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:18.116 [2024-11-19 12:33:23.159393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:18.375 172.33 IOPS, 517.00 MiB/s [2024-11-19T12:33:23.636Z] [2024-11-19 12:33:23.504335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:18.375 [2024-11-19 12:33:23.505636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:18.634 [2024-11-19 12:33:23.729347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:18.634 [2024-11-19 12:33:23.730138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.634 12:33:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.634 "name": "raid_bdev1", 00:13:18.634 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:18.634 "strip_size_kb": 0, 00:13:18.634 "state": "online", 00:13:18.634 "raid_level": "raid1", 00:13:18.634 "superblock": false, 00:13:18.634 "num_base_bdevs": 4, 00:13:18.634 "num_base_bdevs_discovered": 4, 00:13:18.634 "num_base_bdevs_operational": 4, 00:13:18.634 "process": { 00:13:18.634 "type": "rebuild", 00:13:18.634 "target": "spare", 00:13:18.634 "progress": { 00:13:18.634 "blocks": 10240, 00:13:18.634 "percent": 15 00:13:18.634 } 00:13:18.634 }, 00:13:18.634 "base_bdevs_list": [ 00:13:18.634 { 00:13:18.634 "name": "spare", 00:13:18.634 "uuid": "6bd2fb88-9018-5546-b2bf-2956e38f90f6", 00:13:18.634 "is_configured": true, 00:13:18.634 "data_offset": 0, 00:13:18.634 "data_size": 65536 00:13:18.634 }, 00:13:18.634 { 00:13:18.634 "name": "BaseBdev2", 00:13:18.634 "uuid": "36e408fe-5d26-55fa-8694-afaeb6a721c7", 00:13:18.634 "is_configured": true, 00:13:18.634 "data_offset": 0, 00:13:18.634 "data_size": 65536 00:13:18.634 }, 00:13:18.634 { 00:13:18.634 "name": "BaseBdev3", 00:13:18.634 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:18.634 "is_configured": true, 00:13:18.634 "data_offset": 0, 00:13:18.634 "data_size": 65536 00:13:18.634 }, 00:13:18.634 { 00:13:18.634 "name": "BaseBdev4", 00:13:18.634 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:18.634 "is_configured": true, 00:13:18.634 "data_offset": 0, 00:13:18.634 "data_size": 65536 00:13:18.634 } 00:13:18.634 ] 00:13:18.634 }' 00:13:18.634 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # 
[[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.894 12:33:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.894 [2024-11-19 12:33:23.986422] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.894 [2024-11-19 12:33:24.065588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:18.894 [2024-11-19 12:33:24.066084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:19.153 [2024-11-19 12:33:24.168086] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:19.153 [2024-11-19 12:33:24.168191] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:19.153 [2024-11-19 12:33:24.170775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:19.153 12:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:19.153 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:19.153 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:19.153 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.153 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.153 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.153 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.153 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.154 "name": "raid_bdev1", 00:13:19.154 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:19.154 "strip_size_kb": 0, 00:13:19.154 "state": "online", 00:13:19.154 "raid_level": "raid1", 00:13:19.154 "superblock": false, 00:13:19.154 "num_base_bdevs": 4, 00:13:19.154 "num_base_bdevs_discovered": 3, 00:13:19.154 "num_base_bdevs_operational": 3, 00:13:19.154 "process": { 00:13:19.154 "type": "rebuild", 00:13:19.154 "target": "spare", 00:13:19.154 "progress": { 00:13:19.154 "blocks": 14336, 00:13:19.154 "percent": 21 00:13:19.154 } 00:13:19.154 }, 
00:13:19.154 "base_bdevs_list": [ 00:13:19.154 { 00:13:19.154 "name": "spare", 00:13:19.154 "uuid": "6bd2fb88-9018-5546-b2bf-2956e38f90f6", 00:13:19.154 "is_configured": true, 00:13:19.154 "data_offset": 0, 00:13:19.154 "data_size": 65536 00:13:19.154 }, 00:13:19.154 { 00:13:19.154 "name": null, 00:13:19.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.154 "is_configured": false, 00:13:19.154 "data_offset": 0, 00:13:19.154 "data_size": 65536 00:13:19.154 }, 00:13:19.154 { 00:13:19.154 "name": "BaseBdev3", 00:13:19.154 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:19.154 "is_configured": true, 00:13:19.154 "data_offset": 0, 00:13:19.154 "data_size": 65536 00:13:19.154 }, 00:13:19.154 { 00:13:19.154 "name": "BaseBdev4", 00:13:19.154 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:19.154 "is_configured": true, 00:13:19.154 "data_offset": 0, 00:13:19.154 "data_size": 65536 00:13:19.154 } 00:13:19.154 ] 00:13:19.154 }' 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.154 [2024-11-19 12:33:24.300962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.154 "name": "raid_bdev1", 00:13:19.154 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:19.154 "strip_size_kb": 0, 00:13:19.154 "state": "online", 00:13:19.154 "raid_level": "raid1", 00:13:19.154 "superblock": false, 00:13:19.154 "num_base_bdevs": 4, 00:13:19.154 "num_base_bdevs_discovered": 3, 00:13:19.154 "num_base_bdevs_operational": 3, 00:13:19.154 "process": { 00:13:19.154 "type": "rebuild", 00:13:19.154 "target": "spare", 00:13:19.154 "progress": { 00:13:19.154 "blocks": 16384, 00:13:19.154 "percent": 25 00:13:19.154 } 00:13:19.154 }, 00:13:19.154 "base_bdevs_list": [ 00:13:19.154 { 00:13:19.154 "name": "spare", 00:13:19.154 "uuid": "6bd2fb88-9018-5546-b2bf-2956e38f90f6", 00:13:19.154 "is_configured": true, 00:13:19.154 "data_offset": 0, 00:13:19.154 "data_size": 65536 00:13:19.154 }, 00:13:19.154 { 00:13:19.154 "name": null, 00:13:19.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.154 "is_configured": false, 00:13:19.154 "data_offset": 0, 00:13:19.154 
"data_size": 65536 00:13:19.154 }, 00:13:19.154 { 00:13:19.154 "name": "BaseBdev3", 00:13:19.154 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:19.154 "is_configured": true, 00:13:19.154 "data_offset": 0, 00:13:19.154 "data_size": 65536 00:13:19.154 }, 00:13:19.154 { 00:13:19.154 "name": "BaseBdev4", 00:13:19.154 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:19.154 "is_configured": true, 00:13:19.154 "data_offset": 0, 00:13:19.154 "data_size": 65536 00:13:19.154 } 00:13:19.154 ] 00:13:19.154 }' 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.154 12:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:19.414 160.25 IOPS, 480.75 MiB/s [2024-11-19T12:33:24.675Z] [2024-11-19 12:33:24.617054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:19.673 [2024-11-19 12:33:24.874258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:20.243 [2024-11-19 12:33:25.335706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.243 12:33:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.243 137.00 IOPS, 411.00 MiB/s [2024-11-19T12:33:25.504Z] 12:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.243 "name": "raid_bdev1", 00:13:20.243 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:20.243 "strip_size_kb": 0, 00:13:20.243 "state": "online", 00:13:20.243 "raid_level": "raid1", 00:13:20.243 "superblock": false, 00:13:20.243 "num_base_bdevs": 4, 00:13:20.243 "num_base_bdevs_discovered": 3, 00:13:20.243 "num_base_bdevs_operational": 3, 00:13:20.243 "process": { 00:13:20.243 "type": "rebuild", 00:13:20.243 "target": "spare", 00:13:20.243 "progress": { 00:13:20.243 "blocks": 34816, 00:13:20.243 "percent": 53 00:13:20.243 } 00:13:20.243 }, 00:13:20.243 "base_bdevs_list": [ 00:13:20.243 { 00:13:20.243 "name": "spare", 00:13:20.243 "uuid": "6bd2fb88-9018-5546-b2bf-2956e38f90f6", 00:13:20.243 "is_configured": true, 00:13:20.243 "data_offset": 0, 00:13:20.243 "data_size": 65536 00:13:20.243 }, 00:13:20.243 { 00:13:20.243 "name": null, 00:13:20.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.243 "is_configured": false, 00:13:20.243 "data_offset": 0, 00:13:20.243 
"data_size": 65536 00:13:20.243 }, 00:13:20.243 { 00:13:20.243 "name": "BaseBdev3", 00:13:20.243 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:20.243 "is_configured": true, 00:13:20.243 "data_offset": 0, 00:13:20.243 "data_size": 65536 00:13:20.243 }, 00:13:20.243 { 00:13:20.243 "name": "BaseBdev4", 00:13:20.243 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:20.243 "is_configured": true, 00:13:20.243 "data_offset": 0, 00:13:20.243 "data_size": 65536 00:13:20.243 } 00:13:20.243 ] 00:13:20.243 }' 00:13:20.243 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.502 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.502 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.502 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.502 12:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.441 121.67 IOPS, 365.00 MiB/s [2024-11-19T12:33:26.702Z] 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.441 "name": "raid_bdev1", 00:13:21.441 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:21.441 "strip_size_kb": 0, 00:13:21.441 "state": "online", 00:13:21.441 "raid_level": "raid1", 00:13:21.441 "superblock": false, 00:13:21.441 "num_base_bdevs": 4, 00:13:21.441 "num_base_bdevs_discovered": 3, 00:13:21.441 "num_base_bdevs_operational": 3, 00:13:21.441 "process": { 00:13:21.441 "type": "rebuild", 00:13:21.441 "target": "spare", 00:13:21.441 "progress": { 00:13:21.441 "blocks": 55296, 00:13:21.441 "percent": 84 00:13:21.441 } 00:13:21.441 }, 00:13:21.441 "base_bdevs_list": [ 00:13:21.441 { 00:13:21.441 "name": "spare", 00:13:21.441 "uuid": "6bd2fb88-9018-5546-b2bf-2956e38f90f6", 00:13:21.441 "is_configured": true, 00:13:21.441 "data_offset": 0, 00:13:21.441 "data_size": 65536 00:13:21.441 }, 00:13:21.441 { 00:13:21.441 "name": null, 00:13:21.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.441 "is_configured": false, 00:13:21.441 "data_offset": 0, 00:13:21.441 "data_size": 65536 00:13:21.441 }, 00:13:21.441 { 00:13:21.441 "name": "BaseBdev3", 00:13:21.441 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:21.441 "is_configured": true, 00:13:21.441 "data_offset": 0, 00:13:21.441 "data_size": 65536 00:13:21.441 }, 00:13:21.441 { 00:13:21.441 "name": "BaseBdev4", 00:13:21.441 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:21.441 "is_configured": true, 00:13:21.441 "data_offset": 0, 00:13:21.441 "data_size": 65536 00:13:21.441 } 00:13:21.441 ] 00:13:21.441 }' 00:13:21.441 12:33:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.441 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.441 [2024-11-19 12:33:26.687495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:21.700 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.700 12:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.700 [2024-11-19 12:33:26.803464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:21.959 [2024-11-19 12:33:27.133400] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:22.219 [2024-11-19 12:33:27.233204] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:22.219 [2024-11-19 12:33:27.235604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.480 109.43 IOPS, 328.29 MiB/s [2024-11-19T12:33:27.741Z] 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.480 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.480 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.480 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.740 12:33:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.740 "name": "raid_bdev1", 00:13:22.740 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:22.740 "strip_size_kb": 0, 00:13:22.740 "state": "online", 00:13:22.740 "raid_level": "raid1", 00:13:22.740 "superblock": false, 00:13:22.740 "num_base_bdevs": 4, 00:13:22.740 "num_base_bdevs_discovered": 3, 00:13:22.740 "num_base_bdevs_operational": 3, 00:13:22.740 "base_bdevs_list": [ 00:13:22.740 { 00:13:22.740 "name": "spare", 00:13:22.740 "uuid": "6bd2fb88-9018-5546-b2bf-2956e38f90f6", 00:13:22.740 "is_configured": true, 00:13:22.740 "data_offset": 0, 00:13:22.740 "data_size": 65536 00:13:22.740 }, 00:13:22.740 { 00:13:22.740 "name": null, 00:13:22.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.740 "is_configured": false, 00:13:22.740 "data_offset": 0, 00:13:22.740 "data_size": 65536 00:13:22.740 }, 00:13:22.740 { 00:13:22.740 "name": "BaseBdev3", 00:13:22.740 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:22.740 "is_configured": true, 00:13:22.740 "data_offset": 0, 00:13:22.740 "data_size": 65536 00:13:22.740 }, 00:13:22.740 { 00:13:22.740 "name": "BaseBdev4", 00:13:22.740 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:22.740 "is_configured": true, 00:13:22.740 "data_offset": 0, 00:13:22.740 "data_size": 65536 00:13:22.740 } 00:13:22.740 ] 00:13:22.740 }' 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.740 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.740 "name": "raid_bdev1", 00:13:22.740 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:22.740 "strip_size_kb": 0, 00:13:22.740 "state": "online", 00:13:22.740 "raid_level": "raid1", 00:13:22.740 "superblock": false, 00:13:22.740 "num_base_bdevs": 4, 00:13:22.740 "num_base_bdevs_discovered": 
3, 00:13:22.740 "num_base_bdevs_operational": 3, 00:13:22.740 "base_bdevs_list": [ 00:13:22.740 { 00:13:22.740 "name": "spare", 00:13:22.740 "uuid": "6bd2fb88-9018-5546-b2bf-2956e38f90f6", 00:13:22.740 "is_configured": true, 00:13:22.740 "data_offset": 0, 00:13:22.740 "data_size": 65536 00:13:22.740 }, 00:13:22.740 { 00:13:22.740 "name": null, 00:13:22.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.740 "is_configured": false, 00:13:22.740 "data_offset": 0, 00:13:22.740 "data_size": 65536 00:13:22.741 }, 00:13:22.741 { 00:13:22.741 "name": "BaseBdev3", 00:13:22.741 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:22.741 "is_configured": true, 00:13:22.741 "data_offset": 0, 00:13:22.741 "data_size": 65536 00:13:22.741 }, 00:13:22.741 { 00:13:22.741 "name": "BaseBdev4", 00:13:22.741 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:22.741 "is_configured": true, 00:13:22.741 "data_offset": 0, 00:13:22.741 "data_size": 65536 00:13:22.741 } 00:13:22.741 ] 00:13:22.741 }' 00:13:22.741 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.741 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.741 12:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.001 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.001 "name": "raid_bdev1", 00:13:23.001 "uuid": "b22befe0-37be-46b4-9d95-aac9091b409e", 00:13:23.001 "strip_size_kb": 0, 00:13:23.001 "state": "online", 00:13:23.001 "raid_level": "raid1", 00:13:23.001 "superblock": false, 00:13:23.001 "num_base_bdevs": 4, 00:13:23.001 "num_base_bdevs_discovered": 3, 00:13:23.001 "num_base_bdevs_operational": 3, 00:13:23.001 "base_bdevs_list": [ 00:13:23.001 { 00:13:23.001 "name": "spare", 00:13:23.001 "uuid": "6bd2fb88-9018-5546-b2bf-2956e38f90f6", 00:13:23.001 "is_configured": true, 00:13:23.001 "data_offset": 0, 00:13:23.001 "data_size": 65536 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "name": null, 00:13:23.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.002 "is_configured": false, 00:13:23.002 "data_offset": 0, 00:13:23.002 
"data_size": 65536 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "name": "BaseBdev3", 00:13:23.002 "uuid": "8aab4f52-de67-527e-896f-bea2dbec8bac", 00:13:23.002 "is_configured": true, 00:13:23.002 "data_offset": 0, 00:13:23.002 "data_size": 65536 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "name": "BaseBdev4", 00:13:23.002 "uuid": "f5a471f9-5229-594c-bdbd-55dec2869c15", 00:13:23.002 "is_configured": true, 00:13:23.002 "data_offset": 0, 00:13:23.002 "data_size": 65536 00:13:23.002 } 00:13:23.002 ] 00:13:23.002 }' 00:13:23.002 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.002 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.262 100.75 IOPS, 302.25 MiB/s [2024-11-19T12:33:28.523Z] 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:23.262 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.262 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.262 [2024-11-19 12:33:28.481943] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:23.262 [2024-11-19 12:33:28.481978] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.522 00:13:23.522 Latency(us) 00:13:23.522 [2024-11-19T12:33:28.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.522 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:23.522 raid_bdev1 : 8.12 99.53 298.58 0.00 0.00 14274.07 298.70 109894.43 00:13:23.522 [2024-11-19T12:33:28.783Z] =================================================================================================================== 00:13:23.522 [2024-11-19T12:33:28.783Z] Total : 99.53 298.58 0.00 0.00 14274.07 298.70 109894.43 00:13:23.522 [2024-11-19 12:33:28.537147] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.522 [2024-11-19 12:33:28.537199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.522 [2024-11-19 12:33:28.537300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.522 [2024-11-19 12:33:28.537314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:23.522 { 00:13:23.522 "results": [ 00:13:23.522 { 00:13:23.522 "job": "raid_bdev1", 00:13:23.522 "core_mask": "0x1", 00:13:23.522 "workload": "randrw", 00:13:23.522 "percentage": 50, 00:13:23.522 "status": "finished", 00:13:23.522 "queue_depth": 2, 00:13:23.522 "io_size": 3145728, 00:13:23.522 "runtime": 8.118315, 00:13:23.522 "iops": 99.52804245708623, 00:13:23.522 "mibps": 298.58412737125866, 00:13:23.522 "io_failed": 0, 00:13:23.522 "io_timeout": 0, 00:13:23.522 "avg_latency_us": 14274.07254961304, 00:13:23.522 "min_latency_us": 298.70393013100437, 00:13:23.522 "max_latency_us": 109894.42794759825 00:13:23.522 } 00:13:23.522 ], 00:13:23.522 "core_count": 1 00:13:23.522 } 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:23.522 
12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.522 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:23.782 /dev/nbd0 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:23.782 12:33:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.782 1+0 records in 00:13:23.782 1+0 records out 00:13:23.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247749 s, 16.5 MB/s 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- 
# nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.782 12:33:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:24.042 /dev/nbd1 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 
-- # (( i = 1 )) 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.042 1+0 records in 00:13:24.042 1+0 records out 00:13:24.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558265 s, 7.3 MB/s 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:13:24.042 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.302 12:33:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.302 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:24.562 /dev/nbd1 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.562 1+0 records in 00:13:24.562 1+0 records out 00:13:24.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547565 s, 7.5 MB/s 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 
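After the device appears, the trace confirms it is actually readable: `dd` one 4096-byte block into a scratch file, then `stat` the result and gate on a nonzero size. A sketch of that check, with `/dev/nbd1` replaced by a plain source file so it runs anywhere (and `iflag=direct` omitted for the same reason):

```shell
# Read a single block and verify a nonzero number of bytes landed,
# mirroring the dd + stat -c %s + '[' 4096 '!=' 0 ']' sequence in the trace.
src=$(mktemp); scratch=$(mktemp)
head -c 8192 /dev/zero > "$src"       # stand-in for the nbd device

dd if="$src" of="$scratch" bs=4096 count=1 2>/dev/null
size=$(stat -c %s "$scratch")
rm -f "$src" "$scratch"

[ "$size" != 0 ] && echo "device readable ($size bytes)"
```

On the real device, `iflag=direct` bypasses the page cache so the read exercises the nbd path rather than a cached copy.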
00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.562 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.823 
12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.823 12:33:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89560 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89560 ']' 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89560 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:25.083 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.084 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89560 00:13:25.084 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:25.084 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:25.084 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89560' 00:13:25.084 killing process with pid 89560 00:13:25.084 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89560 00:13:25.084 Received shutdown signal, test time was about 9.751469 seconds 00:13:25.084 00:13:25.084 Latency(us) 00:13:25.084 [2024-11-19T12:33:30.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.084 [2024-11-19T12:33:30.345Z] =================================================================================================================== 00:13:25.084 [2024-11-19T12:33:30.345Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:25.084 [2024-11-19 12:33:30.163340] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.084 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89560 00:13:25.084 [2024-11-19 12:33:30.210552] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:13:25.344 00:13:25.344 real 0m11.755s 00:13:25.344 user 0m15.103s 00:13:25.344 sys 0m1.810s 00:13:25.344 ************************************ 00:13:25.344 END TEST raid_rebuild_test_io 00:13:25.344 ************************************ 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.344 12:33:30 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:25.344 12:33:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:25.344 12:33:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.344 12:33:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.344 ************************************ 00:13:25.344 START TEST raid_rebuild_test_sb_io 00:13:25.344 ************************************ 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
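The `killprocess 89560` sequence just above checks the pid is alive with `kill -0`, logs its command name via `ps`, sends the signal, and waits for exit. A hedged sketch of that pattern, demonstrated on a throwaway `sleep` process rather than the bdevperf reactor:

```shell
# Reconstruction of the killprocess flow from the trace: verify the pid,
# inspect its comm name, terminate it, and reap it. Simplified sketch;
# the real helper also special-cases processes running under sudo.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    name=$(ps --no-headers -o comm= "$pid")   # the trace logs this name
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null
}

sleep 60 &
pid=$!
killprocess "$pid"
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```

`kill -0` sends no signal at all; it only reports (via the exit status) whether the pid exists and is signalable, which is why it is safe to use as a liveness probe.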
00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89957 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89957 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89957 ']' 00:13:25.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.344 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.345 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.345 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.345 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.345 12:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.605 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:25.605 Zero copy mechanism will not be used. 00:13:25.605 [2024-11-19 12:33:30.623123] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
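The `(( i <= num_base_bdevs ))` / `echo BaseBdevN` loop traced above is how `raid_rebuild_test` builds its list of base device names. The same construction in isolation (a sketch; the real script captures the echoes via command substitution into the array):

```shell
# Build base_bdevs=(BaseBdev1 .. BaseBdev4), as the traced loop does for
# the 4-device raid1 case.
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"
```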
00:13:25.605 [2024-11-19 12:33:30.623247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89957 ] 00:13:25.605 [2024-11-19 12:33:30.773956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.605 [2024-11-19 12:33:30.818066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.605 [2024-11-19 12:33:30.859543] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.605 [2024-11-19 12:33:30.859578] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.600 BaseBdev1_malloc 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.600 [2024-11-19 12:33:31.465806] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:26.600 [2024-11-19 12:33:31.465925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.600 [2024-11-19 12:33:31.465970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:26.600 [2024-11-19 12:33:31.466013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.600 [2024-11-19 12:33:31.468255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.600 [2024-11-19 12:33:31.468325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:26.600 BaseBdev1 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.600 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 BaseBdev2_malloc 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 [2024-11-19 12:33:31.504501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:26.601 [2024-11-19 12:33:31.504607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:26.601 [2024-11-19 12:33:31.504635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:26.601 [2024-11-19 12:33:31.504645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.601 [2024-11-19 12:33:31.507101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.601 [2024-11-19 12:33:31.507141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:26.601 BaseBdev2 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 BaseBdev3_malloc 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 [2024-11-19 12:33:31.533147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:26.601 [2024-11-19 12:33:31.533193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.601 [2024-11-19 12:33:31.533232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:26.601 
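Each base device in the traced setup is a malloc bdev wrapped in a passthru bdev, created through repeated `rpc_cmd` calls. A sketch of that sequence with `rpc.py` stubbed out by an echo, so the call pattern is visible without a running SPDK target (the real `rpc_cmd` forwards to `scripts/rpc.py -s /var/tmp/spdk.sock`):

```shell
# Stub standing in for SPDK's rpc_cmd wrapper; it only prints the command
# it would have forwarded to the RPC server.
rpc_cmd() { echo "rpc.py -s /var/tmp/spdk.sock $*"; }

# 32 MiB malloc bdevs with 512-byte blocks, each wrapped by a passthru
# bdev, matching the bdev_malloc_create / bdev_passthru_create pairs above.
for i in 1 2 3 4; do
    rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    rpc_cmd bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev$i"
done
```

The passthru layer exists so the test can claim and release the underlying malloc bdev the same way a real stacked bdev would.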
[2024-11-19 12:33:31.533240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.601 [2024-11-19 12:33:31.535276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.601 [2024-11-19 12:33:31.535312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:26.601 BaseBdev3 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 BaseBdev4_malloc 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 [2024-11-19 12:33:31.561511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:26.601 [2024-11-19 12:33:31.561604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.601 [2024-11-19 12:33:31.561631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:26.601 [2024-11-19 12:33:31.561639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.601 [2024-11-19 12:33:31.563652] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.601 [2024-11-19 12:33:31.563688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:26.601 BaseBdev4 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 spare_malloc 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 spare_delay 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 [2024-11-19 12:33:31.601969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:26.601 [2024-11-19 12:33:31.602018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.601 [2024-11-19 12:33:31.602039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:13:26.601 [2024-11-19 12:33:31.602047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.601 [2024-11-19 12:33:31.604154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.601 [2024-11-19 12:33:31.604234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:26.601 spare 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.601 [2024-11-19 12:33:31.614031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.601 [2024-11-19 12:33:31.615910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.601 [2024-11-19 12:33:31.616032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.601 [2024-11-19 12:33:31.616106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:26.601 [2024-11-19 12:33:31.616318] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:26.601 [2024-11-19 12:33:31.616363] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:26.601 [2024-11-19 12:33:31.616631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:26.601 [2024-11-19 12:33:31.616819] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:26.601 [2024-11-19 12:33:31.616863] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:26.601 [2024-11-19 12:33:31.617022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.601 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.602 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.602 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.602 "name": "raid_bdev1", 00:13:26.602 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:26.602 "strip_size_kb": 0, 00:13:26.602 "state": "online", 00:13:26.602 "raid_level": "raid1", 00:13:26.602 "superblock": true, 00:13:26.602 "num_base_bdevs": 4, 00:13:26.602 "num_base_bdevs_discovered": 4, 00:13:26.602 "num_base_bdevs_operational": 4, 00:13:26.602 "base_bdevs_list": [ 00:13:26.602 { 00:13:26.602 "name": "BaseBdev1", 00:13:26.602 "uuid": "a411065c-8256-5d3e-ab8c-21eaf06ed7e3", 00:13:26.602 "is_configured": true, 00:13:26.602 "data_offset": 2048, 00:13:26.602 "data_size": 63488 00:13:26.602 }, 00:13:26.602 { 00:13:26.602 "name": "BaseBdev2", 00:13:26.602 "uuid": "6d946bf6-a9fc-5bbe-a4b5-197d29c2d151", 00:13:26.602 "is_configured": true, 00:13:26.602 "data_offset": 2048, 00:13:26.602 "data_size": 63488 00:13:26.602 }, 00:13:26.602 { 00:13:26.602 "name": "BaseBdev3", 00:13:26.602 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:26.602 "is_configured": true, 00:13:26.602 "data_offset": 2048, 00:13:26.602 "data_size": 63488 00:13:26.602 }, 00:13:26.602 { 00:13:26.602 "name": "BaseBdev4", 00:13:26.602 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:26.602 "is_configured": true, 00:13:26.602 "data_offset": 2048, 00:13:26.602 "data_size": 63488 00:13:26.602 } 00:13:26.602 ] 00:13:26.602 }' 00:13:26.602 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.602 12:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.886 [2024-11-19 12:33:32.061609] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.886 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.146 [2024-11-19 12:33:32.145129] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.146 12:33:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.146 "name": "raid_bdev1", 00:13:27.146 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:27.146 "strip_size_kb": 0, 00:13:27.146 "state": "online", 00:13:27.146 "raid_level": "raid1", 00:13:27.146 
"superblock": true, 00:13:27.146 "num_base_bdevs": 4, 00:13:27.146 "num_base_bdevs_discovered": 3, 00:13:27.146 "num_base_bdevs_operational": 3, 00:13:27.146 "base_bdevs_list": [ 00:13:27.146 { 00:13:27.146 "name": null, 00:13:27.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.146 "is_configured": false, 00:13:27.146 "data_offset": 0, 00:13:27.146 "data_size": 63488 00:13:27.146 }, 00:13:27.146 { 00:13:27.146 "name": "BaseBdev2", 00:13:27.146 "uuid": "6d946bf6-a9fc-5bbe-a4b5-197d29c2d151", 00:13:27.146 "is_configured": true, 00:13:27.146 "data_offset": 2048, 00:13:27.146 "data_size": 63488 00:13:27.146 }, 00:13:27.146 { 00:13:27.146 "name": "BaseBdev3", 00:13:27.146 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:27.146 "is_configured": true, 00:13:27.146 "data_offset": 2048, 00:13:27.146 "data_size": 63488 00:13:27.146 }, 00:13:27.146 { 00:13:27.146 "name": "BaseBdev4", 00:13:27.146 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:27.146 "is_configured": true, 00:13:27.146 "data_offset": 2048, 00:13:27.146 "data_size": 63488 00:13:27.146 } 00:13:27.146 ] 00:13:27.146 }' 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.146 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.146 [2024-11-19 12:33:32.218997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:27.146 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:27.146 Zero copy mechanism will not be used. 00:13:27.146 Running I/O for 60 seconds... 
00:13:27.406 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:27.406 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.406 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.406 [2024-11-19 12:33:32.621400] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.406 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.406 12:33:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:27.406 [2024-11-19 12:33:32.664380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:27.667 [2024-11-19 12:33:32.666890] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.667 [2024-11-19 12:33:32.807996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:27.926 [2024-11-19 12:33:33.068279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:27.926 [2024-11-19 12:33:33.068909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:28.186 223.00 IOPS, 669.00 MiB/s [2024-11-19T12:33:33.447Z] [2024-11-19 12:33:33.393471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:28.186 [2024-11-19 12:33:33.394044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:28.446 [2024-11-19 12:33:33.598844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:28.446 [2024-11-19 12:33:33.599258] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.446 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.446 "name": "raid_bdev1", 00:13:28.446 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:28.446 "strip_size_kb": 0, 00:13:28.446 "state": "online", 00:13:28.446 "raid_level": "raid1", 00:13:28.446 "superblock": true, 00:13:28.446 "num_base_bdevs": 4, 00:13:28.446 "num_base_bdevs_discovered": 4, 00:13:28.446 "num_base_bdevs_operational": 4, 00:13:28.446 "process": { 00:13:28.446 "type": "rebuild", 00:13:28.446 "target": "spare", 00:13:28.446 "progress": { 00:13:28.446 "blocks": 10240, 00:13:28.446 "percent": 16 00:13:28.446 } 00:13:28.446 }, 00:13:28.446 "base_bdevs_list": [ 00:13:28.446 { 00:13:28.446 "name": "spare", 
00:13:28.446 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:28.446 "is_configured": true, 00:13:28.446 "data_offset": 2048, 00:13:28.446 "data_size": 63488 00:13:28.446 }, 00:13:28.446 { 00:13:28.446 "name": "BaseBdev2", 00:13:28.446 "uuid": "6d946bf6-a9fc-5bbe-a4b5-197d29c2d151", 00:13:28.446 "is_configured": true, 00:13:28.446 "data_offset": 2048, 00:13:28.446 "data_size": 63488 00:13:28.446 }, 00:13:28.446 { 00:13:28.446 "name": "BaseBdev3", 00:13:28.446 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:28.446 "is_configured": true, 00:13:28.446 "data_offset": 2048, 00:13:28.446 "data_size": 63488 00:13:28.446 }, 00:13:28.446 { 00:13:28.446 "name": "BaseBdev4", 00:13:28.446 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:28.446 "is_configured": true, 00:13:28.446 "data_offset": 2048, 00:13:28.446 "data_size": 63488 00:13:28.446 } 00:13:28.446 ] 00:13:28.446 }' 00:13:28.706 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.706 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.706 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.706 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.706 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:28.706 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.706 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.706 [2024-11-19 12:33:33.810583] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.706 [2024-11-19 12:33:33.848579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:28.706 [2024-11-19 
12:33:33.957071] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:28.967 [2024-11-19 12:33:33.969118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.967 [2024-11-19 12:33:33.969239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.967 [2024-11-19 12:33:33.969272] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.967 [2024-11-19 12:33:33.990973] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.967 12:33:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.967 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:28.967 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.967 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.967 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.967 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.967 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.967 "name": "raid_bdev1", 00:13:28.967 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:28.967 "strip_size_kb": 0, 00:13:28.967 "state": "online", 00:13:28.967 "raid_level": "raid1", 00:13:28.967 "superblock": true, 00:13:28.967 "num_base_bdevs": 4, 00:13:28.967 "num_base_bdevs_discovered": 3, 00:13:28.967 "num_base_bdevs_operational": 3, 00:13:28.967 "base_bdevs_list": [ 00:13:28.967 { 00:13:28.967 "name": null, 00:13:28.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.967 "is_configured": false, 00:13:28.967 "data_offset": 0, 00:13:28.967 "data_size": 63488 00:13:28.967 }, 00:13:28.967 { 00:13:28.967 "name": "BaseBdev2", 00:13:28.967 "uuid": "6d946bf6-a9fc-5bbe-a4b5-197d29c2d151", 00:13:28.967 "is_configured": true, 00:13:28.967 "data_offset": 2048, 00:13:28.967 "data_size": 63488 00:13:28.967 }, 00:13:28.967 { 00:13:28.967 "name": "BaseBdev3", 00:13:28.967 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:28.967 "is_configured": true, 00:13:28.967 "data_offset": 2048, 00:13:28.967 "data_size": 63488 00:13:28.967 }, 00:13:28.967 { 00:13:28.967 "name": "BaseBdev4", 00:13:28.967 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:28.967 "is_configured": true, 00:13:28.967 "data_offset": 2048, 00:13:28.967 "data_size": 63488 00:13:28.967 } 00:13:28.967 ] 00:13:28.967 }' 00:13:28.967 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.967 12:33:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.227 165.50 IOPS, 496.50 MiB/s [2024-11-19T12:33:34.488Z] 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.227 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.227 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.227 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.227 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.227 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.227 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.227 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.227 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.227 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.487 "name": "raid_bdev1", 00:13:29.487 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:29.487 "strip_size_kb": 0, 00:13:29.487 "state": "online", 00:13:29.487 "raid_level": "raid1", 00:13:29.487 "superblock": true, 00:13:29.487 "num_base_bdevs": 4, 00:13:29.487 "num_base_bdevs_discovered": 3, 00:13:29.487 "num_base_bdevs_operational": 3, 00:13:29.487 "base_bdevs_list": [ 00:13:29.487 { 00:13:29.487 "name": null, 00:13:29.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.487 "is_configured": false, 00:13:29.487 "data_offset": 0, 00:13:29.487 "data_size": 63488 00:13:29.487 }, 00:13:29.487 { 
00:13:29.487 "name": "BaseBdev2", 00:13:29.487 "uuid": "6d946bf6-a9fc-5bbe-a4b5-197d29c2d151", 00:13:29.487 "is_configured": true, 00:13:29.487 "data_offset": 2048, 00:13:29.487 "data_size": 63488 00:13:29.487 }, 00:13:29.487 { 00:13:29.487 "name": "BaseBdev3", 00:13:29.487 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:29.487 "is_configured": true, 00:13:29.487 "data_offset": 2048, 00:13:29.487 "data_size": 63488 00:13:29.487 }, 00:13:29.487 { 00:13:29.487 "name": "BaseBdev4", 00:13:29.487 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:29.487 "is_configured": true, 00:13:29.487 "data_offset": 2048, 00:13:29.487 "data_size": 63488 00:13:29.487 } 00:13:29.487 ] 00:13:29.487 }' 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.487 [2024-11-19 12:33:34.582175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.487 12:33:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:29.487 [2024-11-19 12:33:34.625361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:29.487 [2024-11-19 12:33:34.627835] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.748 [2024-11-19 12:33:34.753219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:29.748 [2024-11-19 12:33:34.753943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:29.748 [2024-11-19 12:33:34.865451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:29.748 [2024-11-19 12:33:34.865961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:30.008 [2024-11-19 12:33:35.212560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:30.267 162.00 IOPS, 486.00 MiB/s [2024-11-19T12:33:35.528Z] [2024-11-19 12:33:35.426089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:30.267 [2024-11-19 12:33:35.426401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.527 12:33:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.527 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.527 "name": "raid_bdev1", 00:13:30.527 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:30.527 "strip_size_kb": 0, 00:13:30.527 "state": "online", 00:13:30.527 "raid_level": "raid1", 00:13:30.527 "superblock": true, 00:13:30.527 "num_base_bdevs": 4, 00:13:30.527 "num_base_bdevs_discovered": 4, 00:13:30.527 "num_base_bdevs_operational": 4, 00:13:30.527 "process": { 00:13:30.527 "type": "rebuild", 00:13:30.527 "target": "spare", 00:13:30.528 "progress": { 00:13:30.528 "blocks": 12288, 00:13:30.528 "percent": 19 00:13:30.528 } 00:13:30.528 }, 00:13:30.528 "base_bdevs_list": [ 00:13:30.528 { 00:13:30.528 "name": "spare", 00:13:30.528 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:30.528 "is_configured": true, 00:13:30.528 "data_offset": 2048, 00:13:30.528 "data_size": 63488 00:13:30.528 }, 00:13:30.528 { 00:13:30.528 "name": "BaseBdev2", 00:13:30.528 "uuid": "6d946bf6-a9fc-5bbe-a4b5-197d29c2d151", 00:13:30.528 "is_configured": true, 00:13:30.528 "data_offset": 2048, 00:13:30.528 "data_size": 63488 00:13:30.528 }, 00:13:30.528 { 00:13:30.528 "name": "BaseBdev3", 00:13:30.528 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:30.528 "is_configured": true, 00:13:30.528 "data_offset": 2048, 00:13:30.528 "data_size": 63488 00:13:30.528 }, 00:13:30.528 { 00:13:30.528 "name": "BaseBdev4", 00:13:30.528 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:30.528 "is_configured": true, 00:13:30.528 "data_offset": 2048, 00:13:30.528 
"data_size": 63488 00:13:30.528 } 00:13:30.528 ] 00:13:30.528 }' 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:30.528 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.528 12:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.528 [2024-11-19 12:33:35.782284] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:30.787 [2024-11-19 12:33:35.866052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:31.048 [2024-11-19 12:33:36.075730] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:31.048 [2024-11-19 12:33:36.075878] 
bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:31.048 [2024-11-19 12:33:36.077930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:31.048 [2024-11-19 12:33:36.078692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.048 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:31.048 "name": "raid_bdev1", 00:13:31.048 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:31.048 "strip_size_kb": 0, 00:13:31.048 "state": "online", 00:13:31.048 "raid_level": "raid1", 00:13:31.048 "superblock": true, 00:13:31.048 "num_base_bdevs": 4, 00:13:31.048 "num_base_bdevs_discovered": 3, 00:13:31.048 "num_base_bdevs_operational": 3, 00:13:31.048 "process": { 00:13:31.048 "type": "rebuild", 00:13:31.048 "target": "spare", 00:13:31.048 "progress": { 00:13:31.048 "blocks": 16384, 00:13:31.048 "percent": 25 00:13:31.048 } 00:13:31.048 }, 00:13:31.048 "base_bdevs_list": [ 00:13:31.048 { 00:13:31.048 "name": "spare", 00:13:31.048 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:31.048 "is_configured": true, 00:13:31.048 "data_offset": 2048, 00:13:31.048 "data_size": 63488 00:13:31.048 }, 00:13:31.048 { 00:13:31.048 "name": null, 00:13:31.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.048 "is_configured": false, 00:13:31.048 "data_offset": 0, 00:13:31.048 "data_size": 63488 00:13:31.048 }, 00:13:31.048 { 00:13:31.048 "name": "BaseBdev3", 00:13:31.048 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:31.048 "is_configured": true, 00:13:31.048 "data_offset": 2048, 00:13:31.048 "data_size": 63488 00:13:31.048 }, 00:13:31.048 { 00:13:31.048 "name": "BaseBdev4", 00:13:31.048 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:31.048 "is_configured": true, 00:13:31.049 "data_offset": 2048, 00:13:31.049 "data_size": 63488 00:13:31.049 } 00:13:31.049 ] 00:13:31.049 }' 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.049 135.75 IOPS, 407.25 MiB/s [2024-11-19T12:33:36.310Z] 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=409 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.049 "name": "raid_bdev1", 00:13:31.049 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:31.049 "strip_size_kb": 0, 00:13:31.049 "state": "online", 00:13:31.049 "raid_level": "raid1", 00:13:31.049 "superblock": true, 00:13:31.049 "num_base_bdevs": 4, 00:13:31.049 "num_base_bdevs_discovered": 3, 00:13:31.049 "num_base_bdevs_operational": 3, 00:13:31.049 "process": { 00:13:31.049 "type": "rebuild", 00:13:31.049 "target": "spare", 00:13:31.049 "progress": { 00:13:31.049 
"blocks": 16384, 00:13:31.049 "percent": 25 00:13:31.049 } 00:13:31.049 }, 00:13:31.049 "base_bdevs_list": [ 00:13:31.049 { 00:13:31.049 "name": "spare", 00:13:31.049 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:31.049 "is_configured": true, 00:13:31.049 "data_offset": 2048, 00:13:31.049 "data_size": 63488 00:13:31.049 }, 00:13:31.049 { 00:13:31.049 "name": null, 00:13:31.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.049 "is_configured": false, 00:13:31.049 "data_offset": 0, 00:13:31.049 "data_size": 63488 00:13:31.049 }, 00:13:31.049 { 00:13:31.049 "name": "BaseBdev3", 00:13:31.049 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:31.049 "is_configured": true, 00:13:31.049 "data_offset": 2048, 00:13:31.049 "data_size": 63488 00:13:31.049 }, 00:13:31.049 { 00:13:31.049 "name": "BaseBdev4", 00:13:31.049 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:31.049 "is_configured": true, 00:13:31.049 "data_offset": 2048, 00:13:31.049 "data_size": 63488 00:13:31.049 } 00:13:31.049 ] 00:13:31.049 }' 00:13:31.049 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.309 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.309 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.309 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.309 12:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.309 [2024-11-19 12:33:36.414087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:31.309 [2024-11-19 12:33:36.537140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:31.309 [2024-11-19 12:33:36.537492] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:31.569 [2024-11-19 12:33:36.772767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:32.139 [2024-11-19 12:33:37.204008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:32.139 [2024-11-19 12:33:37.205330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:32.139 122.40 IOPS, 367.20 MiB/s [2024-11-19T12:33:37.400Z] 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.139 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.399 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:32.399 "name": "raid_bdev1", 00:13:32.399 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:32.399 "strip_size_kb": 0, 00:13:32.399 "state": "online", 00:13:32.399 "raid_level": "raid1", 00:13:32.399 "superblock": true, 00:13:32.399 "num_base_bdevs": 4, 00:13:32.399 "num_base_bdevs_discovered": 3, 00:13:32.399 "num_base_bdevs_operational": 3, 00:13:32.399 "process": { 00:13:32.399 "type": "rebuild", 00:13:32.399 "target": "spare", 00:13:32.399 "progress": { 00:13:32.399 "blocks": 32768, 00:13:32.399 "percent": 51 00:13:32.399 } 00:13:32.399 }, 00:13:32.399 "base_bdevs_list": [ 00:13:32.399 { 00:13:32.399 "name": "spare", 00:13:32.399 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:32.399 "is_configured": true, 00:13:32.399 "data_offset": 2048, 00:13:32.399 "data_size": 63488 00:13:32.399 }, 00:13:32.399 { 00:13:32.399 "name": null, 00:13:32.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.399 "is_configured": false, 00:13:32.399 "data_offset": 0, 00:13:32.399 "data_size": 63488 00:13:32.399 }, 00:13:32.399 { 00:13:32.399 "name": "BaseBdev3", 00:13:32.399 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:32.399 "is_configured": true, 00:13:32.399 "data_offset": 2048, 00:13:32.399 "data_size": 63488 00:13:32.399 }, 00:13:32.399 { 00:13:32.399 "name": "BaseBdev4", 00:13:32.399 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:32.399 "is_configured": true, 00:13:32.399 "data_offset": 2048, 00:13:32.399 "data_size": 63488 00:13:32.399 } 00:13:32.399 ] 00:13:32.399 }' 00:13:32.399 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.399 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.399 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.399 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:13:32.399 12:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.399 [2024-11-19 12:33:37.649435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:33.539 109.17 IOPS, 327.50 MiB/s [2024-11-19T12:33:38.800Z] 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.539 "name": "raid_bdev1", 00:13:33.539 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:33.539 "strip_size_kb": 0, 00:13:33.539 "state": "online", 00:13:33.539 "raid_level": "raid1", 00:13:33.539 "superblock": true, 00:13:33.539 "num_base_bdevs": 4, 00:13:33.539 "num_base_bdevs_discovered": 3, 
00:13:33.539 "num_base_bdevs_operational": 3, 00:13:33.539 "process": { 00:13:33.539 "type": "rebuild", 00:13:33.539 "target": "spare", 00:13:33.539 "progress": { 00:13:33.539 "blocks": 53248, 00:13:33.539 "percent": 83 00:13:33.539 } 00:13:33.539 }, 00:13:33.539 "base_bdevs_list": [ 00:13:33.539 { 00:13:33.539 "name": "spare", 00:13:33.539 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:33.539 "is_configured": true, 00:13:33.539 "data_offset": 2048, 00:13:33.539 "data_size": 63488 00:13:33.539 }, 00:13:33.539 { 00:13:33.539 "name": null, 00:13:33.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.539 "is_configured": false, 00:13:33.539 "data_offset": 0, 00:13:33.539 "data_size": 63488 00:13:33.539 }, 00:13:33.539 { 00:13:33.539 "name": "BaseBdev3", 00:13:33.539 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:33.539 "is_configured": true, 00:13:33.539 "data_offset": 2048, 00:13:33.539 "data_size": 63488 00:13:33.539 }, 00:13:33.539 { 00:13:33.539 "name": "BaseBdev4", 00:13:33.539 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:33.539 "is_configured": true, 00:13:33.539 "data_offset": 2048, 00:13:33.539 "data_size": 63488 00:13:33.539 } 00:13:33.539 ] 00:13:33.539 }' 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.539 [2024-11-19 12:33:38.636242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.539 12:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:33.799 [2024-11-19 12:33:38.967473] 
bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:34.059 [2024-11-19 12:33:39.067326] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:34.059 [2024-11-19 12:33:39.069421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.628 100.86 IOPS, 302.57 MiB/s [2024-11-19T12:33:39.889Z] 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.628 "name": "raid_bdev1", 00:13:34.628 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:34.628 "strip_size_kb": 0, 00:13:34.628 "state": "online", 00:13:34.628 "raid_level": "raid1", 00:13:34.628 "superblock": true, 00:13:34.628 
"num_base_bdevs": 4, 00:13:34.628 "num_base_bdevs_discovered": 3, 00:13:34.628 "num_base_bdevs_operational": 3, 00:13:34.628 "base_bdevs_list": [ 00:13:34.628 { 00:13:34.628 "name": "spare", 00:13:34.628 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:34.628 "is_configured": true, 00:13:34.628 "data_offset": 2048, 00:13:34.628 "data_size": 63488 00:13:34.628 }, 00:13:34.628 { 00:13:34.628 "name": null, 00:13:34.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.628 "is_configured": false, 00:13:34.628 "data_offset": 0, 00:13:34.628 "data_size": 63488 00:13:34.628 }, 00:13:34.628 { 00:13:34.628 "name": "BaseBdev3", 00:13:34.628 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:34.628 "is_configured": true, 00:13:34.628 "data_offset": 2048, 00:13:34.628 "data_size": 63488 00:13:34.628 }, 00:13:34.628 { 00:13:34.628 "name": "BaseBdev4", 00:13:34.628 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:34.628 "is_configured": true, 00:13:34.628 "data_offset": 2048, 00:13:34.628 "data_size": 63488 00:13:34.628 } 00:13:34.628 ] 00:13:34.628 }' 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.628 "name": "raid_bdev1", 00:13:34.628 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:34.628 "strip_size_kb": 0, 00:13:34.628 "state": "online", 00:13:34.628 "raid_level": "raid1", 00:13:34.628 "superblock": true, 00:13:34.628 "num_base_bdevs": 4, 00:13:34.628 "num_base_bdevs_discovered": 3, 00:13:34.628 "num_base_bdevs_operational": 3, 00:13:34.628 "base_bdevs_list": [ 00:13:34.628 { 00:13:34.628 "name": "spare", 00:13:34.628 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:34.628 "is_configured": true, 00:13:34.628 "data_offset": 2048, 00:13:34.628 "data_size": 63488 00:13:34.628 }, 00:13:34.628 { 00:13:34.628 "name": null, 00:13:34.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.628 "is_configured": false, 00:13:34.628 "data_offset": 0, 00:13:34.628 "data_size": 63488 00:13:34.628 }, 00:13:34.628 { 00:13:34.628 "name": "BaseBdev3", 00:13:34.628 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:34.628 "is_configured": true, 00:13:34.628 "data_offset": 2048, 00:13:34.628 "data_size": 63488 00:13:34.628 }, 00:13:34.628 { 00:13:34.628 "name": "BaseBdev4", 
00:13:34.628 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:34.628 "is_configured": true, 00:13:34.628 "data_offset": 2048, 00:13:34.628 "data_size": 63488 00:13:34.628 } 00:13:34.628 ] 00:13:34.628 }' 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.628 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.888 "name": "raid_bdev1", 00:13:34.888 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:34.888 "strip_size_kb": 0, 00:13:34.888 "state": "online", 00:13:34.888 "raid_level": "raid1", 00:13:34.888 "superblock": true, 00:13:34.888 "num_base_bdevs": 4, 00:13:34.888 "num_base_bdevs_discovered": 3, 00:13:34.888 "num_base_bdevs_operational": 3, 00:13:34.888 "base_bdevs_list": [ 00:13:34.888 { 00:13:34.888 "name": "spare", 00:13:34.888 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:34.888 "is_configured": true, 00:13:34.888 "data_offset": 2048, 00:13:34.888 "data_size": 63488 00:13:34.888 }, 00:13:34.888 { 00:13:34.888 "name": null, 00:13:34.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.888 "is_configured": false, 00:13:34.888 "data_offset": 0, 00:13:34.888 "data_size": 63488 00:13:34.888 }, 00:13:34.888 { 00:13:34.888 "name": "BaseBdev3", 00:13:34.888 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:34.888 "is_configured": true, 00:13:34.888 "data_offset": 2048, 00:13:34.888 "data_size": 63488 00:13:34.888 }, 00:13:34.888 { 00:13:34.888 "name": "BaseBdev4", 00:13:34.888 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:34.888 "is_configured": true, 00:13:34.888 "data_offset": 2048, 00:13:34.888 "data_size": 63488 00:13:34.888 } 00:13:34.888 ] 00:13:34.888 }' 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.888 12:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:13:35.147 92.38 IOPS, 277.12 MiB/s [2024-11-19T12:33:40.408Z] 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:35.147 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.147 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.147 [2024-11-19 12:33:40.374237] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.147 [2024-11-19 12:33:40.374336] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.406 00:13:35.406 Latency(us) 00:13:35.406 [2024-11-19T12:33:40.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.406 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:35.406 raid_bdev1 : 8.26 90.31 270.92 0.00 0.00 15479.77 293.34 117220.72 00:13:35.406 [2024-11-19T12:33:40.667Z] =================================================================================================================== 00:13:35.406 [2024-11-19T12:33:40.667Z] Total : 90.31 270.92 0.00 0.00 15479.77 293.34 117220.72 00:13:35.406 [2024-11-19 12:33:40.469101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.406 [2024-11-19 12:33:40.469181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.406 [2024-11-19 12:33:40.469309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.406 [2024-11-19 12:33:40.469352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:35.406 { 00:13:35.406 "results": [ 00:13:35.406 { 00:13:35.406 "job": "raid_bdev1", 00:13:35.406 "core_mask": "0x1", 00:13:35.406 "workload": "randrw", 00:13:35.406 "percentage": 50, 00:13:35.406 "status": "finished", 
00:13:35.406 "queue_depth": 2, 00:13:35.406 "io_size": 3145728, 00:13:35.406 "runtime": 8.260877, 00:13:35.406 "iops": 90.30518188323104, 00:13:35.406 "mibps": 270.9155456496931, 00:13:35.406 "io_failed": 0, 00:13:35.406 "io_timeout": 0, 00:13:35.406 "avg_latency_us": 15479.768570659238, 00:13:35.406 "min_latency_us": 293.3379912663755, 00:13:35.406 "max_latency_us": 117220.7231441048 00:13:35.406 } 00:13:35.406 ], 00:13:35.406 "core_count": 1 00:13:35.406 } 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:35.407 
12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.407 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:35.717 /dev/nbd0 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.717 1+0 records in 00:13:35.717 1+0 records out 00:13:35.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595966 s, 6.9 MB/s 00:13:35.717 
12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.717 12:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:35.977 /dev/nbd1 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.977 1+0 records in 00:13:35.977 1+0 records out 00:13:35.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051913 s, 7.9 MB/s 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.977 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.236 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:36.495 /dev/nbd1 00:13:36.495 12:33:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:36.495 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:36.495 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:36.495 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:36.495 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.496 1+0 records in 00:13:36.496 1+0 records out 00:13:36.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439016 s, 9.3 MB/s 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 
00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.496 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:36.754 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:36.754 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:36.754 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:36.754 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.754 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.754 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:36.754 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:36.754 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.754 
12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:36.755 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.755 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:36.755 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.755 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:36.755 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.755 12:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.013 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.013 [2024-11-19 12:33:42.108289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:37.013 [2024-11-19 12:33:42.108385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.013 [2024-11-19 12:33:42.108445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:37.013 [2024-11-19 12:33:42.108476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.014 [2024-11-19 12:33:42.110598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.014 [2024-11-19 12:33:42.110668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:37.014 [2024-11-19 12:33:42.110812] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:37.014 [2024-11-19 12:33:42.110897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.014 [2024-11-19 12:33:42.111050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.014 [2024-11-19 12:33:42.111185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:37.014 spare 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.014 [2024-11-19 12:33:42.211113] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:37.014 [2024-11-19 12:33:42.211175] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:37.014 [2024-11-19 12:33:42.211457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:13:37.014 [2024-11-19 12:33:42.211632] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:37.014 [2024-11-19 12:33:42.211680] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:37.014 [2024-11-19 12:33:42.211861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.014 "name": "raid_bdev1", 00:13:37.014 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:37.014 "strip_size_kb": 0, 00:13:37.014 "state": "online", 00:13:37.014 "raid_level": "raid1", 00:13:37.014 "superblock": true, 00:13:37.014 "num_base_bdevs": 4, 00:13:37.014 "num_base_bdevs_discovered": 3, 00:13:37.014 "num_base_bdevs_operational": 3, 00:13:37.014 "base_bdevs_list": [ 00:13:37.014 { 00:13:37.014 "name": "spare", 00:13:37.014 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:37.014 "is_configured": true, 00:13:37.014 "data_offset": 2048, 00:13:37.014 "data_size": 63488 00:13:37.014 }, 00:13:37.014 { 00:13:37.014 "name": null, 00:13:37.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.014 "is_configured": false, 00:13:37.014 "data_offset": 2048, 00:13:37.014 "data_size": 63488 00:13:37.014 }, 00:13:37.014 { 00:13:37.014 "name": "BaseBdev3", 00:13:37.014 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:37.014 "is_configured": true, 00:13:37.014 "data_offset": 2048, 00:13:37.014 "data_size": 63488 00:13:37.014 }, 
00:13:37.014 { 00:13:37.014 "name": "BaseBdev4", 00:13:37.014 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:37.014 "is_configured": true, 00:13:37.014 "data_offset": 2048, 00:13:37.014 "data_size": 63488 00:13:37.014 } 00:13:37.014 ] 00:13:37.014 }' 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.014 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.581 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.581 "name": "raid_bdev1", 00:13:37.581 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:37.581 "strip_size_kb": 0, 00:13:37.581 "state": "online", 00:13:37.581 "raid_level": "raid1", 00:13:37.581 "superblock": true, 00:13:37.581 "num_base_bdevs": 4, 00:13:37.581 
"num_base_bdevs_discovered": 3, 00:13:37.581 "num_base_bdevs_operational": 3, 00:13:37.581 "base_bdevs_list": [ 00:13:37.581 { 00:13:37.581 "name": "spare", 00:13:37.581 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:37.581 "is_configured": true, 00:13:37.581 "data_offset": 2048, 00:13:37.581 "data_size": 63488 00:13:37.581 }, 00:13:37.581 { 00:13:37.581 "name": null, 00:13:37.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.581 "is_configured": false, 00:13:37.581 "data_offset": 2048, 00:13:37.581 "data_size": 63488 00:13:37.581 }, 00:13:37.581 { 00:13:37.581 "name": "BaseBdev3", 00:13:37.582 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:37.582 "is_configured": true, 00:13:37.582 "data_offset": 2048, 00:13:37.582 "data_size": 63488 00:13:37.582 }, 00:13:37.582 { 00:13:37.582 "name": "BaseBdev4", 00:13:37.582 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:37.582 "is_configured": true, 00:13:37.582 "data_offset": 2048, 00:13:37.582 "data_size": 63488 00:13:37.582 } 00:13:37.582 ] 00:13:37.582 }' 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.582 12:33:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.582 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.840 [2024-11-19 12:33:42.843142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.840 "name": "raid_bdev1", 00:13:37.840 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:37.840 "strip_size_kb": 0, 00:13:37.840 "state": "online", 00:13:37.840 "raid_level": "raid1", 00:13:37.840 "superblock": true, 00:13:37.840 "num_base_bdevs": 4, 00:13:37.840 "num_base_bdevs_discovered": 2, 00:13:37.840 "num_base_bdevs_operational": 2, 00:13:37.840 "base_bdevs_list": [ 00:13:37.840 { 00:13:37.840 "name": null, 00:13:37.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.840 "is_configured": false, 00:13:37.840 "data_offset": 0, 00:13:37.840 "data_size": 63488 00:13:37.840 }, 00:13:37.840 { 00:13:37.840 "name": null, 00:13:37.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.840 "is_configured": false, 00:13:37.840 "data_offset": 2048, 00:13:37.840 "data_size": 63488 00:13:37.840 }, 00:13:37.840 { 00:13:37.840 "name": "BaseBdev3", 00:13:37.840 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:37.840 "is_configured": true, 00:13:37.840 "data_offset": 2048, 00:13:37.840 "data_size": 63488 00:13:37.840 }, 00:13:37.840 { 00:13:37.840 "name": "BaseBdev4", 00:13:37.840 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:37.840 "is_configured": true, 00:13:37.840 "data_offset": 2048, 00:13:37.840 "data_size": 63488 00:13:37.840 } 00:13:37.840 ] 00:13:37.840 }' 00:13:37.840 12:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.840 12:33:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.099 12:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.099 12:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.099 12:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.099 [2024-11-19 12:33:43.318534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.099 [2024-11-19 12:33:43.318802] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:38.099 [2024-11-19 12:33:43.318876] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:38.099 [2024-11-19 12:33:43.318939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.099 [2024-11-19 12:33:43.322558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:13:38.099 12:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.099 12:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:38.099 [2024-11-19 12:33:43.324655] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.477 "name": "raid_bdev1", 00:13:39.477 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:39.477 "strip_size_kb": 0, 00:13:39.477 "state": "online", 00:13:39.477 "raid_level": "raid1", 00:13:39.477 "superblock": true, 00:13:39.477 "num_base_bdevs": 4, 00:13:39.477 "num_base_bdevs_discovered": 3, 00:13:39.477 "num_base_bdevs_operational": 3, 00:13:39.477 "process": { 00:13:39.477 "type": "rebuild", 00:13:39.477 "target": "spare", 00:13:39.477 "progress": { 00:13:39.477 "blocks": 20480, 00:13:39.477 "percent": 32 00:13:39.477 } 00:13:39.477 }, 00:13:39.477 "base_bdevs_list": [ 00:13:39.477 { 00:13:39.477 "name": "spare", 00:13:39.477 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:39.477 "is_configured": true, 00:13:39.477 "data_offset": 2048, 00:13:39.477 "data_size": 63488 00:13:39.477 }, 00:13:39.477 { 00:13:39.477 "name": null, 00:13:39.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.477 "is_configured": false, 00:13:39.477 "data_offset": 2048, 00:13:39.477 "data_size": 63488 00:13:39.477 }, 00:13:39.477 { 00:13:39.477 "name": "BaseBdev3", 00:13:39.477 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:39.477 "is_configured": true, 00:13:39.477 "data_offset": 2048, 00:13:39.477 "data_size": 63488 00:13:39.477 }, 00:13:39.477 { 
00:13:39.477 "name": "BaseBdev4", 00:13:39.477 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:39.477 "is_configured": true, 00:13:39.477 "data_offset": 2048, 00:13:39.477 "data_size": 63488 00:13:39.477 } 00:13:39.477 ] 00:13:39.477 }' 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.477 [2024-11-19 12:33:44.485184] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.477 [2024-11-19 12:33:44.528662] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:39.477 [2024-11-19 12:33:44.528720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.477 [2024-11-19 12:33:44.528755] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.477 [2024-11-19 12:33:44.528778] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.477 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.477 "name": "raid_bdev1", 00:13:39.477 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:39.477 "strip_size_kb": 0, 00:13:39.477 "state": "online", 00:13:39.477 "raid_level": "raid1", 00:13:39.477 "superblock": true, 00:13:39.477 "num_base_bdevs": 4, 00:13:39.477 "num_base_bdevs_discovered": 2, 00:13:39.477 "num_base_bdevs_operational": 2, 00:13:39.477 "base_bdevs_list": [ 00:13:39.477 { 00:13:39.477 
"name": null, 00:13:39.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.477 "is_configured": false, 00:13:39.477 "data_offset": 0, 00:13:39.477 "data_size": 63488 00:13:39.477 }, 00:13:39.477 { 00:13:39.477 "name": null, 00:13:39.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.477 "is_configured": false, 00:13:39.477 "data_offset": 2048, 00:13:39.477 "data_size": 63488 00:13:39.477 }, 00:13:39.477 { 00:13:39.477 "name": "BaseBdev3", 00:13:39.477 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:39.477 "is_configured": true, 00:13:39.477 "data_offset": 2048, 00:13:39.477 "data_size": 63488 00:13:39.477 }, 00:13:39.477 { 00:13:39.477 "name": "BaseBdev4", 00:13:39.477 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:39.477 "is_configured": true, 00:13:39.477 "data_offset": 2048, 00:13:39.477 "data_size": 63488 00:13:39.477 } 00:13:39.477 ] 00:13:39.477 }' 00:13:39.478 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.478 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.736 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:39.736 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.736 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.736 [2024-11-19 12:33:44.975996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:39.736 [2024-11-19 12:33:44.976098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.736 [2024-11-19 12:33:44.976158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:39.736 [2024-11-19 12:33:44.976188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.736 [2024-11-19 12:33:44.976637] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.736 [2024-11-19 12:33:44.976695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:39.736 [2024-11-19 12:33:44.976814] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:39.736 [2024-11-19 12:33:44.976853] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:39.736 [2024-11-19 12:33:44.976911] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:39.736 [2024-11-19 12:33:44.976970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.736 [2024-11-19 12:33:44.980421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:39.736 spare 00:13:39.736 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.736 12:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:39.736 [2024-11-19 12:33:44.982300] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.110 12:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.110 12:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.110 12:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.110 12:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.110 12:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.110 12:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.110 12:33:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.110 12:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.110 12:33:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.110 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.110 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.110 "name": "raid_bdev1", 00:13:41.110 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:41.110 "strip_size_kb": 0, 00:13:41.110 "state": "online", 00:13:41.110 "raid_level": "raid1", 00:13:41.110 "superblock": true, 00:13:41.110 "num_base_bdevs": 4, 00:13:41.110 "num_base_bdevs_discovered": 3, 00:13:41.110 "num_base_bdevs_operational": 3, 00:13:41.110 "process": { 00:13:41.110 "type": "rebuild", 00:13:41.111 "target": "spare", 00:13:41.111 "progress": { 00:13:41.111 "blocks": 20480, 00:13:41.111 "percent": 32 00:13:41.111 } 00:13:41.111 }, 00:13:41.111 "base_bdevs_list": [ 00:13:41.111 { 00:13:41.111 "name": "spare", 00:13:41.111 "uuid": "ce5d9383-6358-5796-8e02-ea1e7b56ebb7", 00:13:41.111 "is_configured": true, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 }, 00:13:41.111 { 00:13:41.111 "name": null, 00:13:41.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.111 "is_configured": false, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 }, 00:13:41.111 { 00:13:41.111 "name": "BaseBdev3", 00:13:41.111 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:41.111 "is_configured": true, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 }, 00:13:41.111 { 00:13:41.111 "name": "BaseBdev4", 00:13:41.111 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:41.111 "is_configured": true, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 } 00:13:41.111 
] 00:13:41.111 }' 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 [2024-11-19 12:33:46.135354] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.111 [2024-11-19 12:33:46.186285] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:41.111 [2024-11-19 12:33:46.186362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.111 [2024-11-19 12:33:46.186377] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.111 [2024-11-19 12:33:46.186386] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.111 "name": "raid_bdev1", 00:13:41.111 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:41.111 "strip_size_kb": 0, 00:13:41.111 "state": "online", 00:13:41.111 "raid_level": "raid1", 00:13:41.111 "superblock": true, 00:13:41.111 "num_base_bdevs": 4, 00:13:41.111 "num_base_bdevs_discovered": 2, 00:13:41.111 "num_base_bdevs_operational": 2, 00:13:41.111 "base_bdevs_list": [ 00:13:41.111 { 00:13:41.111 "name": null, 00:13:41.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.111 "is_configured": false, 00:13:41.111 "data_offset": 0, 00:13:41.111 "data_size": 63488 00:13:41.111 }, 00:13:41.111 { 
00:13:41.111 "name": null, 00:13:41.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.111 "is_configured": false, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 }, 00:13:41.111 { 00:13:41.111 "name": "BaseBdev3", 00:13:41.111 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:41.111 "is_configured": true, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 }, 00:13:41.111 { 00:13:41.111 "name": "BaseBdev4", 00:13:41.111 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:41.111 "is_configured": true, 00:13:41.111 "data_offset": 2048, 00:13:41.111 "data_size": 63488 00:13:41.111 } 00:13:41.111 ] 00:13:41.111 }' 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.111 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.678 "name": "raid_bdev1", 00:13:41.678 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:41.678 "strip_size_kb": 0, 00:13:41.678 "state": "online", 00:13:41.678 "raid_level": "raid1", 00:13:41.678 "superblock": true, 00:13:41.678 "num_base_bdevs": 4, 00:13:41.678 "num_base_bdevs_discovered": 2, 00:13:41.678 "num_base_bdevs_operational": 2, 00:13:41.678 "base_bdevs_list": [ 00:13:41.678 { 00:13:41.678 "name": null, 00:13:41.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.678 "is_configured": false, 00:13:41.678 "data_offset": 0, 00:13:41.678 "data_size": 63488 00:13:41.678 }, 00:13:41.678 { 00:13:41.678 "name": null, 00:13:41.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.678 "is_configured": false, 00:13:41.678 "data_offset": 2048, 00:13:41.678 "data_size": 63488 00:13:41.678 }, 00:13:41.678 { 00:13:41.678 "name": "BaseBdev3", 00:13:41.678 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:41.678 "is_configured": true, 00:13:41.678 "data_offset": 2048, 00:13:41.678 "data_size": 63488 00:13:41.678 }, 00:13:41.678 { 00:13:41.678 "name": "BaseBdev4", 00:13:41.678 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:41.678 "is_configured": true, 00:13:41.678 "data_offset": 2048, 00:13:41.678 "data_size": 63488 00:13:41.678 } 00:13:41.678 ] 00:13:41.678 }' 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- 
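The verify_raid_bdev_process helper above pulls the background-process fields out of the `bdev_raid_get_bdevs` JSON with `jq -r '.process.type // "none"'` and `jq -r '.process.target // "none"'`, falling back to "none" once the rebuild has finished and the spare was deleted. A minimal Python sketch of that defaulting logic (the sample dict is abridged from the log output, not the full RPC schema):

```python
import json

# Abridged bdev_raid_get_bdevs entry mirroring the log above; after the
# spare passthru bdev is deleted, the "process" object is absent entirely.
raid_bdev_info = json.loads('{"name": "raid_bdev1", "state": "online"}')

# Equivalent of jq's '.process.type // "none"': default at each level.
process = raid_bdev_info.get("process") or {}
process_type = process.get("type") or "none"
process_target = process.get("target") or "none"

print(process_type, process_target)  # none none
```

The `// "none"` alternative operator is what lets the same jq filter serve both the mid-rebuild check (`rebuild`/`spare`) and the post-rebuild check (`none`/`none`) seen in this log.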
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.678 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.678 [2024-11-19 12:33:46.837314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:41.678 [2024-11-19 12:33:46.837373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.679 [2024-11-19 12:33:46.837393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:41.679 [2024-11-19 12:33:46.837403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.679 [2024-11-19 12:33:46.837849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.679 [2024-11-19 12:33:46.837891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.679 [2024-11-19 12:33:46.837981] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:41.679 [2024-11-19 12:33:46.838007] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:41.679 [2024-11-19 12:33:46.838015] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:41.679 [2024-11-19 12:33:46.838029] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:41.679 BaseBdev1 00:13:41.679 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.679 12:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.613 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.871 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.871 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.871 "name": "raid_bdev1", 00:13:42.871 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:42.871 "strip_size_kb": 0, 00:13:42.871 "state": "online", 00:13:42.871 "raid_level": "raid1", 00:13:42.871 "superblock": true, 00:13:42.871 "num_base_bdevs": 4, 00:13:42.871 "num_base_bdevs_discovered": 2, 00:13:42.871 "num_base_bdevs_operational": 2, 00:13:42.871 "base_bdevs_list": [ 00:13:42.871 { 00:13:42.871 "name": null, 00:13:42.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.871 "is_configured": false, 00:13:42.871 "data_offset": 0, 00:13:42.871 "data_size": 63488 00:13:42.871 }, 00:13:42.871 { 00:13:42.871 "name": null, 00:13:42.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.871 "is_configured": false, 00:13:42.871 "data_offset": 2048, 00:13:42.871 "data_size": 63488 00:13:42.871 }, 00:13:42.871 { 00:13:42.871 "name": "BaseBdev3", 00:13:42.871 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:42.871 "is_configured": true, 00:13:42.871 "data_offset": 2048, 00:13:42.871 "data_size": 63488 00:13:42.871 }, 00:13:42.871 { 00:13:42.871 "name": "BaseBdev4", 00:13:42.871 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:42.871 "is_configured": true, 00:13:42.871 "data_offset": 2048, 00:13:42.871 "data_size": 63488 00:13:42.871 } 00:13:42.871 ] 00:13:42.871 }' 00:13:42.871 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.871 12:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.129 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.129 "name": "raid_bdev1", 00:13:43.129 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:43.129 "strip_size_kb": 0, 00:13:43.129 "state": "online", 00:13:43.129 "raid_level": "raid1", 00:13:43.129 "superblock": true, 00:13:43.129 "num_base_bdevs": 4, 00:13:43.129 "num_base_bdevs_discovered": 2, 00:13:43.129 "num_base_bdevs_operational": 2, 00:13:43.129 "base_bdevs_list": [ 00:13:43.129 { 00:13:43.129 "name": null, 00:13:43.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.129 "is_configured": false, 00:13:43.129 "data_offset": 0, 00:13:43.129 "data_size": 63488 00:13:43.129 }, 00:13:43.129 { 00:13:43.129 "name": null, 00:13:43.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.129 "is_configured": false, 00:13:43.129 "data_offset": 2048, 00:13:43.129 "data_size": 63488 00:13:43.129 }, 00:13:43.129 { 00:13:43.129 "name": "BaseBdev3", 00:13:43.129 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:43.129 "is_configured": true, 00:13:43.129 "data_offset": 2048, 00:13:43.129 "data_size": 63488 00:13:43.129 }, 00:13:43.129 { 00:13:43.129 
"name": "BaseBdev4", 00:13:43.129 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:43.129 "is_configured": true, 00:13:43.129 "data_offset": 2048, 00:13:43.129 "data_size": 63488 00:13:43.130 } 00:13:43.130 ] 00:13:43.130 }' 00:13:43.130 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.130 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.130 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.388 [2024-11-19 12:33:48.446859] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.388 [2024-11-19 12:33:48.447077] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:43.388 [2024-11-19 12:33:48.447149] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:43.388 request: 00:13:43.388 { 00:13:43.388 "base_bdev": "BaseBdev1", 00:13:43.388 "raid_bdev": "raid_bdev1", 00:13:43.388 "method": "bdev_raid_add_base_bdev", 00:13:43.388 "req_id": 1 00:13:43.388 } 00:13:43.388 Got JSON-RPC error response 00:13:43.388 response: 00:13:43.388 { 00:13:43.388 "code": -22, 00:13:43.388 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:43.388 } 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.388 12:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
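The NOT wrapper above expects `bdev_raid_add_base_bdev` to fail: BaseBdev1's superblock sequence number (1) is stale relative to raid_bdev1 (6), so the RPC rejects it with code -22 and the test only passes because the command errored. A minimal Python sketch of checking such a JSON-RPC error response (body abridged from the log):

```python
import json

# JSON-RPC error body as shown in the log (abridged).
response = json.loads("""
{
  "code": -22,
  "message": "Failed to add base bdev to RAID bdev: Invalid argument"
}
""")

# The NOT helper inverts the exit status: the check succeeds only when
# the RPC failed, i.e. the error code is nonzero.
rpc_failed = response["code"] != 0
print(rpc_failed)  # True
```

This is the negative-test pattern the autotest framework uses throughout: assert that a stale or invalid base bdev is refused rather than silently re-added.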
-- # local strip_size=0 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.325 "name": "raid_bdev1", 00:13:44.325 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:44.325 "strip_size_kb": 0, 00:13:44.325 "state": "online", 00:13:44.325 "raid_level": "raid1", 00:13:44.325 "superblock": true, 00:13:44.325 "num_base_bdevs": 4, 00:13:44.325 "num_base_bdevs_discovered": 2, 00:13:44.325 "num_base_bdevs_operational": 2, 00:13:44.325 "base_bdevs_list": [ 00:13:44.325 { 00:13:44.325 "name": null, 00:13:44.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.325 "is_configured": false, 00:13:44.325 "data_offset": 0, 00:13:44.325 "data_size": 63488 00:13:44.325 }, 00:13:44.325 { 00:13:44.325 "name": null, 00:13:44.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.325 "is_configured": false, 
00:13:44.325 "data_offset": 2048, 00:13:44.325 "data_size": 63488 00:13:44.325 }, 00:13:44.325 { 00:13:44.325 "name": "BaseBdev3", 00:13:44.325 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:44.325 "is_configured": true, 00:13:44.325 "data_offset": 2048, 00:13:44.325 "data_size": 63488 00:13:44.325 }, 00:13:44.325 { 00:13:44.325 "name": "BaseBdev4", 00:13:44.325 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:44.325 "is_configured": true, 00:13:44.325 "data_offset": 2048, 00:13:44.325 "data_size": 63488 00:13:44.325 } 00:13:44.325 ] 00:13:44.325 }' 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.325 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.893 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
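Throughout this test, verify_raid_bdev_state expects `num_base_bdevs_discovered` to hold at 2: the two removed slots show null names and all-zero UUIDs with `is_configured: false`, while BaseBdev3 and BaseBdev4 remain configured. A small Python sketch of how that count falls out of the `base_bdevs_list` (list abridged from the log):

```python
# Abridged base_bdevs_list from the log: two removed slots, two live bdevs.
base_bdevs_list = [
    {"name": None, "is_configured": False},
    {"name": None, "is_configured": False},
    {"name": "BaseBdev3", "is_configured": True},
    {"name": "BaseBdev4", "is_configured": True},
]

num_base_bdevs = len(base_bdevs_list)
num_base_bdevs_discovered = sum(1 for b in base_bdevs_list if b["is_configured"])

print(num_base_bdevs, num_base_bdevs_discovered)  # 4 2
```

Since raid1 with superblock keeps the array online as long as the operational count is met, the state stays "online" here even with half the base bdevs gone.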
raid_bdev_info='{ 00:13:44.893 "name": "raid_bdev1", 00:13:44.893 "uuid": "4cf0ea6b-9e15-42b3-9ae3-db8d45ec7ec6", 00:13:44.893 "strip_size_kb": 0, 00:13:44.893 "state": "online", 00:13:44.893 "raid_level": "raid1", 00:13:44.893 "superblock": true, 00:13:44.893 "num_base_bdevs": 4, 00:13:44.893 "num_base_bdevs_discovered": 2, 00:13:44.894 "num_base_bdevs_operational": 2, 00:13:44.894 "base_bdevs_list": [ 00:13:44.894 { 00:13:44.894 "name": null, 00:13:44.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.894 "is_configured": false, 00:13:44.894 "data_offset": 0, 00:13:44.894 "data_size": 63488 00:13:44.894 }, 00:13:44.894 { 00:13:44.894 "name": null, 00:13:44.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.894 "is_configured": false, 00:13:44.894 "data_offset": 2048, 00:13:44.894 "data_size": 63488 00:13:44.894 }, 00:13:44.894 { 00:13:44.894 "name": "BaseBdev3", 00:13:44.894 "uuid": "0a529eee-a77c-539a-9e05-eff70cb822fc", 00:13:44.894 "is_configured": true, 00:13:44.894 "data_offset": 2048, 00:13:44.894 "data_size": 63488 00:13:44.894 }, 00:13:44.894 { 00:13:44.894 "name": "BaseBdev4", 00:13:44.894 "uuid": "163e91dd-9cd4-5b39-97c7-02414b5239c6", 00:13:44.894 "is_configured": true, 00:13:44.894 "data_offset": 2048, 00:13:44.894 "data_size": 63488 00:13:44.894 } 00:13:44.894 ] 00:13:44.894 }' 00:13:44.894 12:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89957 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 
89957 ']' 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89957 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89957 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89957' 00:13:44.894 killing process with pid 89957 00:13:44.894 Received shutdown signal, test time was about 17.909689 seconds 00:13:44.894 00:13:44.894 Latency(us) 00:13:44.894 [2024-11-19T12:33:50.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.894 [2024-11-19T12:33:50.155Z] =================================================================================================================== 00:13:44.894 [2024-11-19T12:33:50.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89957 00:13:44.894 [2024-11-19 12:33:50.096228] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.894 [2024-11-19 12:33:50.096356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.894 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89957 00:13:44.894 [2024-11-19 12:33:50.096426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.894 [2024-11-19 12:33:50.096435] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:44.894 [2024-11-19 12:33:50.141620] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.154 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:45.154 00:13:45.154 real 0m19.855s 00:13:45.154 user 0m26.330s 00:13:45.154 sys 0m2.678s 00:13:45.154 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.154 ************************************ 00:13:45.154 END TEST raid_rebuild_test_sb_io 00:13:45.154 ************************************ 00:13:45.154 12:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.414 12:33:50 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:45.414 12:33:50 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:45.414 12:33:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:45.414 12:33:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.414 12:33:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.414 ************************************ 00:13:45.414 START TEST raid5f_state_function_test 00:13:45.414 ************************************ 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:45.414 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90668 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90668' 00:13:45.415 Process raid pid: 90668 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90668 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90668 ']' 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.415 12:33:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.415 [2024-11-19 12:33:50.559435] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
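The raid_state_function_test setup above derives its create arguments from the parameters: raid5f is a striped level, so the `'[' raid5f '!=' raid1 ']'` branch sets strip_size=64 and passes `-z 64`, while superblock=false leaves superblock_create_arg empty. A minimal Python sketch of that branching (the helper name is hypothetical, and the `-s` flag for the superblock case is an assumption; this log only exercises the empty non-superblock path):

```python
def raid_create_args(raid_level, superblock):
    """Mirror the argument branching seen in raid_state_function_test."""
    # RAID1 mirrors whole bdevs, so only non-raid1 levels take a strip size.
    strip_size_arg = "" if raid_level == "raid1" else "-z 64"
    # "-s" here is an assumed superblock flag, not confirmed by this log.
    superblock_arg = "-s" if superblock else ""
    return strip_size_arg, superblock_arg

print(raid_create_args("raid5f", False))  # ('-z 64', '')
```

That is why the subsequent `rpc_cmd bdev_raid_create -z 64 -r raid5f ... -n Existed_Raid` call in this log carries the strip size but no superblock argument.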
00:13:45.415 [2024-11-19 12:33:50.559663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.674 [2024-11-19 12:33:50.727212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.674 [2024-11-19 12:33:50.773877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.674 [2024-11-19 12:33:50.816045] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.674 [2024-11-19 12:33:50.816194] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.241 [2024-11-19 12:33:51.389296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.241 [2024-11-19 12:33:51.389413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.241 [2024-11-19 12:33:51.389462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.241 [2024-11-19 12:33:51.389474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.241 [2024-11-19 12:33:51.389480] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:46.241 [2024-11-19 12:33:51.389491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.241 "name": "Existed_Raid", 00:13:46.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.241 "strip_size_kb": 64, 00:13:46.241 "state": "configuring", 00:13:46.241 "raid_level": "raid5f", 00:13:46.241 "superblock": false, 00:13:46.241 "num_base_bdevs": 3, 00:13:46.241 "num_base_bdevs_discovered": 0, 00:13:46.241 "num_base_bdevs_operational": 3, 00:13:46.241 "base_bdevs_list": [ 00:13:46.241 { 00:13:46.241 "name": "BaseBdev1", 00:13:46.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.241 "is_configured": false, 00:13:46.241 "data_offset": 0, 00:13:46.241 "data_size": 0 00:13:46.241 }, 00:13:46.241 { 00:13:46.241 "name": "BaseBdev2", 00:13:46.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.241 "is_configured": false, 00:13:46.241 "data_offset": 0, 00:13:46.241 "data_size": 0 00:13:46.241 }, 00:13:46.241 { 00:13:46.241 "name": "BaseBdev3", 00:13:46.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.241 "is_configured": false, 00:13:46.241 "data_offset": 0, 00:13:46.241 "data_size": 0 00:13:46.241 } 00:13:46.241 ] 00:13:46.241 }' 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.241 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.808 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:46.808 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.808 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.808 [2024-11-19 12:33:51.824447] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.808 [2024-11-19 12:33:51.824540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:13:46.808 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.808 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:46.808 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.808 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.808 [2024-11-19 12:33:51.836458] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.808 [2024-11-19 12:33:51.836541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.809 [2024-11-19 12:33:51.836597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.809 [2024-11-19 12:33:51.836644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.809 [2024-11-19 12:33:51.836677] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:46.809 [2024-11-19 12:33:51.836728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.809 [2024-11-19 12:33:51.857096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.809 BaseBdev1 00:13:46.809 12:33:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.809 [ 00:13:46.809 { 00:13:46.809 "name": "BaseBdev1", 00:13:46.809 "aliases": [ 00:13:46.809 "2f70d872-a8f4-46a0-8296-cdd326695910" 00:13:46.809 ], 00:13:46.809 "product_name": "Malloc disk", 00:13:46.809 "block_size": 512, 00:13:46.809 "num_blocks": 65536, 00:13:46.809 "uuid": "2f70d872-a8f4-46a0-8296-cdd326695910", 00:13:46.809 "assigned_rate_limits": { 00:13:46.809 "rw_ios_per_sec": 0, 00:13:46.809 
"rw_mbytes_per_sec": 0, 00:13:46.809 "r_mbytes_per_sec": 0, 00:13:46.809 "w_mbytes_per_sec": 0 00:13:46.809 }, 00:13:46.809 "claimed": true, 00:13:46.809 "claim_type": "exclusive_write", 00:13:46.809 "zoned": false, 00:13:46.809 "supported_io_types": { 00:13:46.809 "read": true, 00:13:46.809 "write": true, 00:13:46.809 "unmap": true, 00:13:46.809 "flush": true, 00:13:46.809 "reset": true, 00:13:46.809 "nvme_admin": false, 00:13:46.809 "nvme_io": false, 00:13:46.809 "nvme_io_md": false, 00:13:46.809 "write_zeroes": true, 00:13:46.809 "zcopy": true, 00:13:46.809 "get_zone_info": false, 00:13:46.809 "zone_management": false, 00:13:46.809 "zone_append": false, 00:13:46.809 "compare": false, 00:13:46.809 "compare_and_write": false, 00:13:46.809 "abort": true, 00:13:46.809 "seek_hole": false, 00:13:46.809 "seek_data": false, 00:13:46.809 "copy": true, 00:13:46.809 "nvme_iov_md": false 00:13:46.809 }, 00:13:46.809 "memory_domains": [ 00:13:46.809 { 00:13:46.809 "dma_device_id": "system", 00:13:46.809 "dma_device_type": 1 00:13:46.809 }, 00:13:46.809 { 00:13:46.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.809 "dma_device_type": 2 00:13:46.809 } 00:13:46.809 ], 00:13:46.809 "driver_specific": {} 00:13:46.809 } 00:13:46.809 ] 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.809 12:33:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.809 "name": "Existed_Raid", 00:13:46.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.809 "strip_size_kb": 64, 00:13:46.809 "state": "configuring", 00:13:46.809 "raid_level": "raid5f", 00:13:46.809 "superblock": false, 00:13:46.809 "num_base_bdevs": 3, 00:13:46.809 "num_base_bdevs_discovered": 1, 00:13:46.809 "num_base_bdevs_operational": 3, 00:13:46.809 "base_bdevs_list": [ 00:13:46.809 { 00:13:46.809 "name": "BaseBdev1", 00:13:46.809 "uuid": "2f70d872-a8f4-46a0-8296-cdd326695910", 00:13:46.809 "is_configured": true, 00:13:46.809 "data_offset": 0, 00:13:46.809 "data_size": 65536 00:13:46.809 }, 00:13:46.809 { 00:13:46.809 "name": 
"BaseBdev2", 00:13:46.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.809 "is_configured": false, 00:13:46.809 "data_offset": 0, 00:13:46.809 "data_size": 0 00:13:46.809 }, 00:13:46.809 { 00:13:46.809 "name": "BaseBdev3", 00:13:46.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.809 "is_configured": false, 00:13:46.809 "data_offset": 0, 00:13:46.809 "data_size": 0 00:13:46.809 } 00:13:46.809 ] 00:13:46.809 }' 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.809 12:33:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.069 [2024-11-19 12:33:52.308365] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.069 [2024-11-19 12:33:52.308483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.069 [2024-11-19 12:33:52.320384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.069 [2024-11-19 12:33:52.322257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:47.069 [2024-11-19 12:33:52.322337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.069 [2024-11-19 12:33:52.322394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:47.069 [2024-11-19 12:33:52.322442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:47.069 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.328 "name": "Existed_Raid", 00:13:47.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.328 "strip_size_kb": 64, 00:13:47.328 "state": "configuring", 00:13:47.328 "raid_level": "raid5f", 00:13:47.328 "superblock": false, 00:13:47.328 "num_base_bdevs": 3, 00:13:47.328 "num_base_bdevs_discovered": 1, 00:13:47.328 "num_base_bdevs_operational": 3, 00:13:47.328 "base_bdevs_list": [ 00:13:47.328 { 00:13:47.328 "name": "BaseBdev1", 00:13:47.328 "uuid": "2f70d872-a8f4-46a0-8296-cdd326695910", 00:13:47.328 "is_configured": true, 00:13:47.328 "data_offset": 0, 00:13:47.328 "data_size": 65536 00:13:47.328 }, 00:13:47.328 { 00:13:47.328 "name": "BaseBdev2", 00:13:47.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.328 "is_configured": false, 00:13:47.328 "data_offset": 0, 00:13:47.328 "data_size": 0 00:13:47.328 }, 00:13:47.328 { 00:13:47.328 "name": "BaseBdev3", 00:13:47.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.328 "is_configured": false, 00:13:47.328 "data_offset": 0, 00:13:47.328 "data_size": 0 00:13:47.328 } 00:13:47.328 ] 00:13:47.328 }' 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.328 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.587 [2024-11-19 12:33:52.814450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.587 BaseBdev2 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.587 12:33:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.587 [ 00:13:47.587 { 00:13:47.587 "name": "BaseBdev2", 00:13:47.587 "aliases": [ 00:13:47.587 "455c5793-1d74-434c-b177-7beaa386bb54" 00:13:47.587 ], 00:13:47.587 "product_name": "Malloc disk", 00:13:47.587 "block_size": 512, 00:13:47.587 "num_blocks": 65536, 00:13:47.587 "uuid": "455c5793-1d74-434c-b177-7beaa386bb54", 00:13:47.587 "assigned_rate_limits": { 00:13:47.587 "rw_ios_per_sec": 0, 00:13:47.587 "rw_mbytes_per_sec": 0, 00:13:47.587 "r_mbytes_per_sec": 0, 00:13:47.587 "w_mbytes_per_sec": 0 00:13:47.587 }, 00:13:47.587 "claimed": true, 00:13:47.587 "claim_type": "exclusive_write", 00:13:47.587 "zoned": false, 00:13:47.587 "supported_io_types": { 00:13:47.587 "read": true, 00:13:47.587 "write": true, 00:13:47.587 "unmap": true, 00:13:47.587 "flush": true, 00:13:47.845 "reset": true, 00:13:47.845 "nvme_admin": false, 00:13:47.845 "nvme_io": false, 00:13:47.845 "nvme_io_md": false, 00:13:47.845 "write_zeroes": true, 00:13:47.845 "zcopy": true, 00:13:47.845 "get_zone_info": false, 00:13:47.845 "zone_management": false, 00:13:47.845 "zone_append": false, 00:13:47.845 "compare": false, 00:13:47.845 "compare_and_write": false, 00:13:47.845 "abort": true, 00:13:47.845 "seek_hole": false, 00:13:47.845 "seek_data": false, 00:13:47.845 "copy": true, 00:13:47.845 "nvme_iov_md": false 00:13:47.845 }, 00:13:47.845 "memory_domains": [ 00:13:47.845 { 00:13:47.846 "dma_device_id": "system", 00:13:47.846 "dma_device_type": 1 00:13:47.846 }, 00:13:47.846 { 00:13:47.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.846 "dma_device_type": 2 00:13:47.846 } 00:13:47.846 ], 00:13:47.846 "driver_specific": {} 00:13:47.846 } 00:13:47.846 ] 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:47.846 "name": "Existed_Raid", 00:13:47.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.846 "strip_size_kb": 64, 00:13:47.846 "state": "configuring", 00:13:47.846 "raid_level": "raid5f", 00:13:47.846 "superblock": false, 00:13:47.846 "num_base_bdevs": 3, 00:13:47.846 "num_base_bdevs_discovered": 2, 00:13:47.846 "num_base_bdevs_operational": 3, 00:13:47.846 "base_bdevs_list": [ 00:13:47.846 { 00:13:47.846 "name": "BaseBdev1", 00:13:47.846 "uuid": "2f70d872-a8f4-46a0-8296-cdd326695910", 00:13:47.846 "is_configured": true, 00:13:47.846 "data_offset": 0, 00:13:47.846 "data_size": 65536 00:13:47.846 }, 00:13:47.846 { 00:13:47.846 "name": "BaseBdev2", 00:13:47.846 "uuid": "455c5793-1d74-434c-b177-7beaa386bb54", 00:13:47.846 "is_configured": true, 00:13:47.846 "data_offset": 0, 00:13:47.846 "data_size": 65536 00:13:47.846 }, 00:13:47.846 { 00:13:47.846 "name": "BaseBdev3", 00:13:47.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.846 "is_configured": false, 00:13:47.846 "data_offset": 0, 00:13:47.846 "data_size": 0 00:13:47.846 } 00:13:47.846 ] 00:13:47.846 }' 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.846 12:33:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.104 [2024-11-19 12:33:53.288540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.104 [2024-11-19 12:33:53.288670] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:48.104 [2024-11-19 12:33:53.288704] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:48.104 [2024-11-19 12:33:53.289037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:48.104 [2024-11-19 12:33:53.289530] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:48.104 [2024-11-19 12:33:53.289584] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:48.104 [2024-11-19 12:33:53.289840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.104 BaseBdev3 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.104 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.104 [ 00:13:48.104 { 00:13:48.104 "name": "BaseBdev3", 00:13:48.104 "aliases": [ 00:13:48.104 "543012de-e781-4251-bd4a-3b07ce470f11" 00:13:48.104 ], 00:13:48.104 "product_name": "Malloc disk", 00:13:48.104 "block_size": 512, 00:13:48.104 "num_blocks": 65536, 00:13:48.104 "uuid": "543012de-e781-4251-bd4a-3b07ce470f11", 00:13:48.104 "assigned_rate_limits": { 00:13:48.104 "rw_ios_per_sec": 0, 00:13:48.104 "rw_mbytes_per_sec": 0, 00:13:48.104 "r_mbytes_per_sec": 0, 00:13:48.104 "w_mbytes_per_sec": 0 00:13:48.104 }, 00:13:48.104 "claimed": true, 00:13:48.104 "claim_type": "exclusive_write", 00:13:48.104 "zoned": false, 00:13:48.105 "supported_io_types": { 00:13:48.105 "read": true, 00:13:48.105 "write": true, 00:13:48.105 "unmap": true, 00:13:48.105 "flush": true, 00:13:48.105 "reset": true, 00:13:48.105 "nvme_admin": false, 00:13:48.105 "nvme_io": false, 00:13:48.105 "nvme_io_md": false, 00:13:48.105 "write_zeroes": true, 00:13:48.105 "zcopy": true, 00:13:48.105 "get_zone_info": false, 00:13:48.105 "zone_management": false, 00:13:48.105 "zone_append": false, 00:13:48.105 "compare": false, 00:13:48.105 "compare_and_write": false, 00:13:48.105 "abort": true, 00:13:48.105 "seek_hole": false, 00:13:48.105 "seek_data": false, 00:13:48.105 "copy": true, 00:13:48.105 "nvme_iov_md": false 00:13:48.105 }, 00:13:48.105 "memory_domains": [ 00:13:48.105 { 00:13:48.105 "dma_device_id": "system", 00:13:48.105 "dma_device_type": 1 00:13:48.105 }, 00:13:48.105 { 00:13:48.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.105 "dma_device_type": 2 00:13:48.105 } 00:13:48.105 ], 00:13:48.105 "driver_specific": {} 00:13:48.105 } 00:13:48.105 ] 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.105 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.105 12:33:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.363 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.363 "name": "Existed_Raid", 00:13:48.363 "uuid": "1fc1e501-4262-445e-a68e-ff2f2df63d5b", 00:13:48.363 "strip_size_kb": 64, 00:13:48.363 "state": "online", 00:13:48.363 "raid_level": "raid5f", 00:13:48.363 "superblock": false, 00:13:48.363 "num_base_bdevs": 3, 00:13:48.363 "num_base_bdevs_discovered": 3, 00:13:48.363 "num_base_bdevs_operational": 3, 00:13:48.363 "base_bdevs_list": [ 00:13:48.363 { 00:13:48.363 "name": "BaseBdev1", 00:13:48.363 "uuid": "2f70d872-a8f4-46a0-8296-cdd326695910", 00:13:48.363 "is_configured": true, 00:13:48.363 "data_offset": 0, 00:13:48.363 "data_size": 65536 00:13:48.363 }, 00:13:48.363 { 00:13:48.363 "name": "BaseBdev2", 00:13:48.363 "uuid": "455c5793-1d74-434c-b177-7beaa386bb54", 00:13:48.363 "is_configured": true, 00:13:48.363 "data_offset": 0, 00:13:48.363 "data_size": 65536 00:13:48.363 }, 00:13:48.363 { 00:13:48.363 "name": "BaseBdev3", 00:13:48.363 "uuid": "543012de-e781-4251-bd4a-3b07ce470f11", 00:13:48.363 "is_configured": true, 00:13:48.363 "data_offset": 0, 00:13:48.363 "data_size": 65536 00:13:48.363 } 00:13:48.363 ] 00:13:48.363 }' 00:13:48.363 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.363 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:48.622 12:33:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.622 [2024-11-19 12:33:53.740013] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:48.622 "name": "Existed_Raid", 00:13:48.622 "aliases": [ 00:13:48.622 "1fc1e501-4262-445e-a68e-ff2f2df63d5b" 00:13:48.622 ], 00:13:48.622 "product_name": "Raid Volume", 00:13:48.622 "block_size": 512, 00:13:48.622 "num_blocks": 131072, 00:13:48.622 "uuid": "1fc1e501-4262-445e-a68e-ff2f2df63d5b", 00:13:48.622 "assigned_rate_limits": { 00:13:48.622 "rw_ios_per_sec": 0, 00:13:48.622 "rw_mbytes_per_sec": 0, 00:13:48.622 "r_mbytes_per_sec": 0, 00:13:48.622 "w_mbytes_per_sec": 0 00:13:48.622 }, 00:13:48.622 "claimed": false, 00:13:48.622 "zoned": false, 00:13:48.622 "supported_io_types": { 00:13:48.622 "read": true, 00:13:48.622 "write": true, 00:13:48.622 "unmap": false, 00:13:48.622 "flush": false, 00:13:48.622 "reset": true, 00:13:48.622 "nvme_admin": false, 00:13:48.622 "nvme_io": false, 00:13:48.622 "nvme_io_md": false, 00:13:48.622 "write_zeroes": true, 00:13:48.622 "zcopy": false, 00:13:48.622 "get_zone_info": false, 00:13:48.622 "zone_management": false, 00:13:48.622 "zone_append": false, 
00:13:48.622 "compare": false, 00:13:48.622 "compare_and_write": false, 00:13:48.622 "abort": false, 00:13:48.622 "seek_hole": false, 00:13:48.622 "seek_data": false, 00:13:48.622 "copy": false, 00:13:48.622 "nvme_iov_md": false 00:13:48.622 }, 00:13:48.622 "driver_specific": { 00:13:48.622 "raid": { 00:13:48.622 "uuid": "1fc1e501-4262-445e-a68e-ff2f2df63d5b", 00:13:48.622 "strip_size_kb": 64, 00:13:48.622 "state": "online", 00:13:48.622 "raid_level": "raid5f", 00:13:48.622 "superblock": false, 00:13:48.622 "num_base_bdevs": 3, 00:13:48.622 "num_base_bdevs_discovered": 3, 00:13:48.622 "num_base_bdevs_operational": 3, 00:13:48.622 "base_bdevs_list": [ 00:13:48.622 { 00:13:48.622 "name": "BaseBdev1", 00:13:48.622 "uuid": "2f70d872-a8f4-46a0-8296-cdd326695910", 00:13:48.622 "is_configured": true, 00:13:48.622 "data_offset": 0, 00:13:48.622 "data_size": 65536 00:13:48.622 }, 00:13:48.622 { 00:13:48.622 "name": "BaseBdev2", 00:13:48.622 "uuid": "455c5793-1d74-434c-b177-7beaa386bb54", 00:13:48.622 "is_configured": true, 00:13:48.622 "data_offset": 0, 00:13:48.622 "data_size": 65536 00:13:48.622 }, 00:13:48.622 { 00:13:48.622 "name": "BaseBdev3", 00:13:48.622 "uuid": "543012de-e781-4251-bd4a-3b07ce470f11", 00:13:48.622 "is_configured": true, 00:13:48.622 "data_offset": 0, 00:13:48.622 "data_size": 65536 00:13:48.622 } 00:13:48.622 ] 00:13:48.622 } 00:13:48.622 } 00:13:48.622 }' 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:48.622 BaseBdev2 00:13:48.622 BaseBdev3' 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.622 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.881 [2024-11-19 12:33:53.979482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:48.881 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:48.881 
12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.882 12:33:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.882 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.882 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.882 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.882 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.882 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.882 "name": "Existed_Raid", 00:13:48.882 "uuid": "1fc1e501-4262-445e-a68e-ff2f2df63d5b", 00:13:48.882 "strip_size_kb": 64, 00:13:48.882 "state": 
"online", 00:13:48.882 "raid_level": "raid5f", 00:13:48.882 "superblock": false, 00:13:48.882 "num_base_bdevs": 3, 00:13:48.882 "num_base_bdevs_discovered": 2, 00:13:48.882 "num_base_bdevs_operational": 2, 00:13:48.882 "base_bdevs_list": [ 00:13:48.882 { 00:13:48.882 "name": null, 00:13:48.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.882 "is_configured": false, 00:13:48.882 "data_offset": 0, 00:13:48.882 "data_size": 65536 00:13:48.882 }, 00:13:48.882 { 00:13:48.882 "name": "BaseBdev2", 00:13:48.882 "uuid": "455c5793-1d74-434c-b177-7beaa386bb54", 00:13:48.882 "is_configured": true, 00:13:48.882 "data_offset": 0, 00:13:48.882 "data_size": 65536 00:13:48.882 }, 00:13:48.882 { 00:13:48.882 "name": "BaseBdev3", 00:13:48.882 "uuid": "543012de-e781-4251-bd4a-3b07ce470f11", 00:13:48.882 "is_configured": true, 00:13:48.882 "data_offset": 0, 00:13:48.882 "data_size": 65536 00:13:48.882 } 00:13:48.882 ] 00:13:48.882 }' 00:13:48.882 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.882 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.449 [2024-11-19 12:33:54.541878] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.449 [2024-11-19 12:33:54.542033] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.449 [2024-11-19 12:33:54.553385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.449 [2024-11-19 12:33:54.609327] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:49.449 [2024-11-19 12:33:54.609427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.449 BaseBdev2 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.449 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:49.709 [ 00:13:49.709 { 00:13:49.709 "name": "BaseBdev2", 00:13:49.709 "aliases": [ 00:13:49.709 "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5" 00:13:49.709 ], 00:13:49.709 "product_name": "Malloc disk", 00:13:49.709 "block_size": 512, 00:13:49.709 "num_blocks": 65536, 00:13:49.709 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:49.709 "assigned_rate_limits": { 00:13:49.709 "rw_ios_per_sec": 0, 00:13:49.709 "rw_mbytes_per_sec": 0, 00:13:49.709 "r_mbytes_per_sec": 0, 00:13:49.709 "w_mbytes_per_sec": 0 00:13:49.709 }, 00:13:49.709 "claimed": false, 00:13:49.709 "zoned": false, 00:13:49.709 "supported_io_types": { 00:13:49.709 "read": true, 00:13:49.709 "write": true, 00:13:49.709 "unmap": true, 00:13:49.709 "flush": true, 00:13:49.709 "reset": true, 00:13:49.709 "nvme_admin": false, 00:13:49.709 "nvme_io": false, 00:13:49.709 "nvme_io_md": false, 00:13:49.709 "write_zeroes": true, 00:13:49.709 "zcopy": true, 00:13:49.709 "get_zone_info": false, 00:13:49.709 "zone_management": false, 00:13:49.709 "zone_append": false, 00:13:49.709 "compare": false, 00:13:49.709 "compare_and_write": false, 00:13:49.709 "abort": true, 00:13:49.709 "seek_hole": false, 00:13:49.709 "seek_data": false, 00:13:49.709 "copy": true, 00:13:49.709 "nvme_iov_md": false 00:13:49.709 }, 00:13:49.709 "memory_domains": [ 00:13:49.709 { 00:13:49.709 "dma_device_id": "system", 00:13:49.709 "dma_device_type": 1 00:13:49.709 }, 00:13:49.709 { 00:13:49.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.709 "dma_device_type": 2 00:13:49.709 } 00:13:49.709 ], 00:13:49.709 "driver_specific": {} 00:13:49.709 } 00:13:49.709 ] 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.709 BaseBdev3 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.709 [ 00:13:49.709 { 00:13:49.709 "name": "BaseBdev3", 00:13:49.709 "aliases": [ 00:13:49.709 "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93" 00:13:49.709 ], 00:13:49.709 "product_name": "Malloc disk", 00:13:49.709 "block_size": 512, 00:13:49.709 "num_blocks": 65536, 00:13:49.709 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:49.709 "assigned_rate_limits": { 00:13:49.709 "rw_ios_per_sec": 0, 00:13:49.709 "rw_mbytes_per_sec": 0, 00:13:49.709 "r_mbytes_per_sec": 0, 00:13:49.709 "w_mbytes_per_sec": 0 00:13:49.709 }, 00:13:49.709 "claimed": false, 00:13:49.709 "zoned": false, 00:13:49.709 "supported_io_types": { 00:13:49.709 "read": true, 00:13:49.709 "write": true, 00:13:49.709 "unmap": true, 00:13:49.709 "flush": true, 00:13:49.709 "reset": true, 00:13:49.709 "nvme_admin": false, 00:13:49.709 "nvme_io": false, 00:13:49.709 "nvme_io_md": false, 00:13:49.709 "write_zeroes": true, 00:13:49.709 "zcopy": true, 00:13:49.709 "get_zone_info": false, 00:13:49.709 "zone_management": false, 00:13:49.709 "zone_append": false, 00:13:49.709 "compare": false, 00:13:49.709 "compare_and_write": false, 00:13:49.709 "abort": true, 00:13:49.709 "seek_hole": false, 00:13:49.709 "seek_data": false, 00:13:49.709 "copy": true, 00:13:49.709 "nvme_iov_md": false 00:13:49.709 }, 00:13:49.709 "memory_domains": [ 00:13:49.709 { 00:13:49.709 "dma_device_id": "system", 00:13:49.709 "dma_device_type": 1 00:13:49.709 }, 00:13:49.709 { 00:13:49.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.709 "dma_device_type": 2 00:13:49.709 } 00:13:49.709 ], 00:13:49.709 "driver_specific": {} 00:13:49.709 } 00:13:49.709 ] 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:49.709 12:33:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:49.709 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.710 [2024-11-19 12:33:54.785548] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:49.710 [2024-11-19 12:33:54.785646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:49.710 [2024-11-19 12:33:54.785686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.710 [2024-11-19 12:33:54.787547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.710 12:33:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.710 "name": "Existed_Raid", 00:13:49.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.710 "strip_size_kb": 64, 00:13:49.710 "state": "configuring", 00:13:49.710 "raid_level": "raid5f", 00:13:49.710 "superblock": false, 00:13:49.710 "num_base_bdevs": 3, 00:13:49.710 "num_base_bdevs_discovered": 2, 00:13:49.710 "num_base_bdevs_operational": 3, 00:13:49.710 "base_bdevs_list": [ 00:13:49.710 { 00:13:49.710 "name": "BaseBdev1", 00:13:49.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.710 "is_configured": false, 00:13:49.710 "data_offset": 0, 00:13:49.710 "data_size": 0 00:13:49.710 }, 00:13:49.710 { 00:13:49.710 "name": "BaseBdev2", 00:13:49.710 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:49.710 "is_configured": true, 00:13:49.710 "data_offset": 0, 00:13:49.710 "data_size": 65536 00:13:49.710 }, 00:13:49.710 { 00:13:49.710 "name": "BaseBdev3", 00:13:49.710 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:49.710 "is_configured": true, 
00:13:49.710 "data_offset": 0, 00:13:49.710 "data_size": 65536 00:13:49.710 } 00:13:49.710 ] 00:13:49.710 }' 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.710 12:33:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.969 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:49.969 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.969 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.969 [2024-11-19 12:33:55.220837] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.228 12:33:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.228 "name": "Existed_Raid", 00:13:50.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.228 "strip_size_kb": 64, 00:13:50.228 "state": "configuring", 00:13:50.228 "raid_level": "raid5f", 00:13:50.228 "superblock": false, 00:13:50.228 "num_base_bdevs": 3, 00:13:50.228 "num_base_bdevs_discovered": 1, 00:13:50.228 "num_base_bdevs_operational": 3, 00:13:50.228 "base_bdevs_list": [ 00:13:50.228 { 00:13:50.228 "name": "BaseBdev1", 00:13:50.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.228 "is_configured": false, 00:13:50.228 "data_offset": 0, 00:13:50.228 "data_size": 0 00:13:50.228 }, 00:13:50.228 { 00:13:50.228 "name": null, 00:13:50.228 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:50.228 "is_configured": false, 00:13:50.228 "data_offset": 0, 00:13:50.228 "data_size": 65536 00:13:50.228 }, 00:13:50.228 { 00:13:50.228 "name": "BaseBdev3", 00:13:50.228 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:50.228 "is_configured": true, 00:13:50.228 "data_offset": 0, 00:13:50.228 "data_size": 65536 00:13:50.228 } 00:13:50.228 ] 00:13:50.228 }' 00:13:50.228 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.228 12:33:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.488 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.488 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.488 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.488 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:50.488 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.488 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:50.488 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.488 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.488 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.748 [2024-11-19 12:33:55.748830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.748 BaseBdev1 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.748 12:33:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.748 [ 00:13:50.748 { 00:13:50.748 "name": "BaseBdev1", 00:13:50.748 "aliases": [ 00:13:50.748 "fdec0bd1-b929-403b-b5c2-4e077744c36c" 00:13:50.748 ], 00:13:50.748 "product_name": "Malloc disk", 00:13:50.748 "block_size": 512, 00:13:50.748 "num_blocks": 65536, 00:13:50.748 "uuid": "fdec0bd1-b929-403b-b5c2-4e077744c36c", 00:13:50.748 "assigned_rate_limits": { 00:13:50.748 "rw_ios_per_sec": 0, 00:13:50.748 "rw_mbytes_per_sec": 0, 00:13:50.748 "r_mbytes_per_sec": 0, 00:13:50.748 "w_mbytes_per_sec": 0 00:13:50.748 }, 00:13:50.748 "claimed": true, 00:13:50.748 "claim_type": "exclusive_write", 00:13:50.748 "zoned": false, 00:13:50.748 "supported_io_types": { 00:13:50.748 "read": true, 00:13:50.748 "write": true, 00:13:50.748 "unmap": true, 00:13:50.748 "flush": true, 00:13:50.748 "reset": true, 00:13:50.748 "nvme_admin": false, 00:13:50.748 "nvme_io": false, 00:13:50.748 "nvme_io_md": false, 00:13:50.748 "write_zeroes": true, 00:13:50.748 "zcopy": true, 00:13:50.748 "get_zone_info": false, 00:13:50.748 "zone_management": false, 00:13:50.748 "zone_append": false, 00:13:50.748 
"compare": false, 00:13:50.748 "compare_and_write": false, 00:13:50.748 "abort": true, 00:13:50.748 "seek_hole": false, 00:13:50.748 "seek_data": false, 00:13:50.748 "copy": true, 00:13:50.748 "nvme_iov_md": false 00:13:50.748 }, 00:13:50.748 "memory_domains": [ 00:13:50.748 { 00:13:50.748 "dma_device_id": "system", 00:13:50.748 "dma_device_type": 1 00:13:50.748 }, 00:13:50.748 { 00:13:50.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.748 "dma_device_type": 2 00:13:50.748 } 00:13:50.748 ], 00:13:50.748 "driver_specific": {} 00:13:50.748 } 00:13:50.748 ] 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.748 12:33:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.748 "name": "Existed_Raid", 00:13:50.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.748 "strip_size_kb": 64, 00:13:50.748 "state": "configuring", 00:13:50.748 "raid_level": "raid5f", 00:13:50.748 "superblock": false, 00:13:50.748 "num_base_bdevs": 3, 00:13:50.748 "num_base_bdevs_discovered": 2, 00:13:50.748 "num_base_bdevs_operational": 3, 00:13:50.748 "base_bdevs_list": [ 00:13:50.748 { 00:13:50.748 "name": "BaseBdev1", 00:13:50.748 "uuid": "fdec0bd1-b929-403b-b5c2-4e077744c36c", 00:13:50.748 "is_configured": true, 00:13:50.748 "data_offset": 0, 00:13:50.748 "data_size": 65536 00:13:50.748 }, 00:13:50.748 { 00:13:50.748 "name": null, 00:13:50.748 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:50.748 "is_configured": false, 00:13:50.748 "data_offset": 0, 00:13:50.748 "data_size": 65536 00:13:50.748 }, 00:13:50.748 { 00:13:50.748 "name": "BaseBdev3", 00:13:50.748 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:50.748 "is_configured": true, 00:13:50.748 "data_offset": 0, 00:13:50.748 "data_size": 65536 00:13:50.748 } 00:13:50.748 ] 00:13:50.748 }' 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.748 12:33:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.009 12:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.009 [2024-11-19 12:33:56.251966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.009 12:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.009 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.271 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.271 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.271 "name": "Existed_Raid", 00:13:51.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.271 "strip_size_kb": 64, 00:13:51.271 "state": "configuring", 00:13:51.271 "raid_level": "raid5f", 00:13:51.271 "superblock": false, 00:13:51.271 "num_base_bdevs": 3, 00:13:51.271 "num_base_bdevs_discovered": 1, 00:13:51.271 "num_base_bdevs_operational": 3, 00:13:51.271 "base_bdevs_list": [ 00:13:51.271 { 00:13:51.271 "name": "BaseBdev1", 00:13:51.271 "uuid": "fdec0bd1-b929-403b-b5c2-4e077744c36c", 00:13:51.271 "is_configured": true, 00:13:51.271 "data_offset": 0, 00:13:51.271 "data_size": 65536 00:13:51.271 }, 00:13:51.271 { 00:13:51.271 "name": null, 00:13:51.271 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:51.271 "is_configured": false, 00:13:51.271 "data_offset": 0, 00:13:51.271 "data_size": 65536 00:13:51.271 }, 00:13:51.271 { 00:13:51.271 "name": null, 
00:13:51.271 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:51.271 "is_configured": false, 00:13:51.271 "data_offset": 0, 00:13:51.271 "data_size": 65536 00:13:51.271 } 00:13:51.271 ] 00:13:51.271 }' 00:13:51.271 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.271 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.544 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.544 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:51.544 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.544 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.544 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.544 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:51.544 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:51.544 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.544 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.544 [2024-11-19 12:33:56.731222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.545 12:33:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.545 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.821 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.821 "name": "Existed_Raid", 00:13:51.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.821 "strip_size_kb": 64, 00:13:51.821 "state": "configuring", 00:13:51.821 "raid_level": "raid5f", 00:13:51.821 "superblock": false, 00:13:51.821 "num_base_bdevs": 3, 00:13:51.821 "num_base_bdevs_discovered": 2, 00:13:51.821 "num_base_bdevs_operational": 3, 00:13:51.821 "base_bdevs_list": [ 00:13:51.821 { 
00:13:51.821 "name": "BaseBdev1", 00:13:51.821 "uuid": "fdec0bd1-b929-403b-b5c2-4e077744c36c", 00:13:51.821 "is_configured": true, 00:13:51.821 "data_offset": 0, 00:13:51.821 "data_size": 65536 00:13:51.821 }, 00:13:51.821 { 00:13:51.821 "name": null, 00:13:51.821 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:51.821 "is_configured": false, 00:13:51.821 "data_offset": 0, 00:13:51.821 "data_size": 65536 00:13:51.822 }, 00:13:51.822 { 00:13:51.822 "name": "BaseBdev3", 00:13:51.822 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:51.822 "is_configured": true, 00:13:51.822 "data_offset": 0, 00:13:51.822 "data_size": 65536 00:13:51.822 } 00:13:51.822 ] 00:13:51.822 }' 00:13:51.822 12:33:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.822 12:33:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 [2024-11-19 12:33:57.206449] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.082 12:33:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.082 "name": "Existed_Raid", 00:13:52.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.083 "strip_size_kb": 64, 00:13:52.083 "state": "configuring", 00:13:52.083 "raid_level": "raid5f", 00:13:52.083 "superblock": false, 00:13:52.083 "num_base_bdevs": 3, 00:13:52.083 "num_base_bdevs_discovered": 1, 00:13:52.083 "num_base_bdevs_operational": 3, 00:13:52.083 "base_bdevs_list": [ 00:13:52.083 { 00:13:52.083 "name": null, 00:13:52.083 "uuid": "fdec0bd1-b929-403b-b5c2-4e077744c36c", 00:13:52.083 "is_configured": false, 00:13:52.083 "data_offset": 0, 00:13:52.083 "data_size": 65536 00:13:52.083 }, 00:13:52.083 { 00:13:52.083 "name": null, 00:13:52.083 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:52.083 "is_configured": false, 00:13:52.083 "data_offset": 0, 00:13:52.083 "data_size": 65536 00:13:52.083 }, 00:13:52.083 { 00:13:52.083 "name": "BaseBdev3", 00:13:52.083 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:52.083 "is_configured": true, 00:13:52.083 "data_offset": 0, 00:13:52.083 "data_size": 65536 00:13:52.083 } 00:13:52.083 ] 00:13:52.083 }' 00:13:52.083 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.083 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.654 [2024-11-19 12:33:57.737659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.654 12:33:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.654 "name": "Existed_Raid", 00:13:52.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.654 "strip_size_kb": 64, 00:13:52.654 "state": "configuring", 00:13:52.654 "raid_level": "raid5f", 00:13:52.654 "superblock": false, 00:13:52.654 "num_base_bdevs": 3, 00:13:52.654 "num_base_bdevs_discovered": 2, 00:13:52.654 "num_base_bdevs_operational": 3, 00:13:52.654 "base_bdevs_list": [ 00:13:52.654 { 00:13:52.654 "name": null, 00:13:52.654 "uuid": "fdec0bd1-b929-403b-b5c2-4e077744c36c", 00:13:52.654 "is_configured": false, 00:13:52.654 "data_offset": 0, 00:13:52.654 "data_size": 65536 00:13:52.654 }, 00:13:52.654 { 00:13:52.654 "name": "BaseBdev2", 00:13:52.654 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:52.654 "is_configured": true, 00:13:52.654 "data_offset": 0, 00:13:52.654 "data_size": 65536 00:13:52.654 }, 00:13:52.654 { 00:13:52.654 "name": "BaseBdev3", 00:13:52.654 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:52.654 "is_configured": true, 00:13:52.654 "data_offset": 0, 00:13:52.654 "data_size": 65536 00:13:52.654 } 00:13:52.654 ] 00:13:52.654 }' 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.654 12:33:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.225 12:33:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fdec0bd1-b929-403b-b5c2-4e077744c36c 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 [2024-11-19 12:33:58.329559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:53.225 [2024-11-19 12:33:58.329700] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:53.225 [2024-11-19 12:33:58.329733] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:53.225 [2024-11-19 12:33:58.330103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:13:53.225 [2024-11-19 12:33:58.330659] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:53.225 [2024-11-19 12:33:58.330727] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:53.225 [2024-11-19 12:33:58.331033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.225 NewBaseBdev 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.225 12:33:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 [ 00:13:53.225 { 00:13:53.225 "name": "NewBaseBdev", 00:13:53.225 "aliases": [ 00:13:53.225 "fdec0bd1-b929-403b-b5c2-4e077744c36c" 00:13:53.225 ], 00:13:53.225 "product_name": "Malloc disk", 00:13:53.225 "block_size": 512, 00:13:53.225 "num_blocks": 65536, 00:13:53.225 "uuid": "fdec0bd1-b929-403b-b5c2-4e077744c36c", 00:13:53.225 "assigned_rate_limits": { 00:13:53.225 "rw_ios_per_sec": 0, 00:13:53.225 "rw_mbytes_per_sec": 0, 00:13:53.225 "r_mbytes_per_sec": 0, 00:13:53.225 "w_mbytes_per_sec": 0 00:13:53.225 }, 00:13:53.225 "claimed": true, 00:13:53.225 "claim_type": "exclusive_write", 00:13:53.225 "zoned": false, 00:13:53.225 "supported_io_types": { 00:13:53.225 "read": true, 00:13:53.225 "write": true, 00:13:53.225 "unmap": true, 00:13:53.225 "flush": true, 00:13:53.225 "reset": true, 00:13:53.225 "nvme_admin": false, 00:13:53.225 "nvme_io": false, 00:13:53.225 "nvme_io_md": false, 00:13:53.225 "write_zeroes": true, 00:13:53.225 "zcopy": true, 00:13:53.225 "get_zone_info": false, 00:13:53.225 "zone_management": false, 00:13:53.225 "zone_append": false, 00:13:53.225 "compare": false, 00:13:53.225 "compare_and_write": false, 00:13:53.225 "abort": true, 00:13:53.225 "seek_hole": false, 00:13:53.225 "seek_data": false, 00:13:53.225 "copy": true, 00:13:53.225 "nvme_iov_md": false 00:13:53.225 }, 00:13:53.225 "memory_domains": [ 00:13:53.225 { 00:13:53.225 "dma_device_id": "system", 00:13:53.225 "dma_device_type": 1 00:13:53.225 }, 00:13:53.225 { 00:13:53.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.225 "dma_device_type": 2 00:13:53.225 } 00:13:53.225 ], 00:13:53.225 "driver_specific": {} 00:13:53.225 } 00:13:53.225 ] 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:53.225 12:33:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.225 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.225 "name": "Existed_Raid", 00:13:53.225 "uuid": "f7d8c668-c1cc-4d99-9897-b04b89490db6", 00:13:53.225 "strip_size_kb": 64, 00:13:53.225 "state": "online", 
00:13:53.225 "raid_level": "raid5f", 00:13:53.225 "superblock": false, 00:13:53.225 "num_base_bdevs": 3, 00:13:53.225 "num_base_bdevs_discovered": 3, 00:13:53.225 "num_base_bdevs_operational": 3, 00:13:53.225 "base_bdevs_list": [ 00:13:53.225 { 00:13:53.225 "name": "NewBaseBdev", 00:13:53.225 "uuid": "fdec0bd1-b929-403b-b5c2-4e077744c36c", 00:13:53.225 "is_configured": true, 00:13:53.225 "data_offset": 0, 00:13:53.225 "data_size": 65536 00:13:53.225 }, 00:13:53.225 { 00:13:53.226 "name": "BaseBdev2", 00:13:53.226 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:53.226 "is_configured": true, 00:13:53.226 "data_offset": 0, 00:13:53.226 "data_size": 65536 00:13:53.226 }, 00:13:53.226 { 00:13:53.226 "name": "BaseBdev3", 00:13:53.226 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:53.226 "is_configured": true, 00:13:53.226 "data_offset": 0, 00:13:53.226 "data_size": 65536 00:13:53.226 } 00:13:53.226 ] 00:13:53.226 }' 00:13:53.226 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.226 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.795 [2024-11-19 12:33:58.844969] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.795 "name": "Existed_Raid", 00:13:53.795 "aliases": [ 00:13:53.795 "f7d8c668-c1cc-4d99-9897-b04b89490db6" 00:13:53.795 ], 00:13:53.795 "product_name": "Raid Volume", 00:13:53.795 "block_size": 512, 00:13:53.795 "num_blocks": 131072, 00:13:53.795 "uuid": "f7d8c668-c1cc-4d99-9897-b04b89490db6", 00:13:53.795 "assigned_rate_limits": { 00:13:53.795 "rw_ios_per_sec": 0, 00:13:53.795 "rw_mbytes_per_sec": 0, 00:13:53.795 "r_mbytes_per_sec": 0, 00:13:53.795 "w_mbytes_per_sec": 0 00:13:53.795 }, 00:13:53.795 "claimed": false, 00:13:53.795 "zoned": false, 00:13:53.795 "supported_io_types": { 00:13:53.795 "read": true, 00:13:53.795 "write": true, 00:13:53.795 "unmap": false, 00:13:53.795 "flush": false, 00:13:53.795 "reset": true, 00:13:53.795 "nvme_admin": false, 00:13:53.795 "nvme_io": false, 00:13:53.795 "nvme_io_md": false, 00:13:53.795 "write_zeroes": true, 00:13:53.795 "zcopy": false, 00:13:53.795 "get_zone_info": false, 00:13:53.795 "zone_management": false, 00:13:53.795 "zone_append": false, 00:13:53.795 "compare": false, 00:13:53.795 "compare_and_write": false, 00:13:53.795 "abort": false, 00:13:53.795 "seek_hole": false, 00:13:53.795 "seek_data": false, 00:13:53.795 "copy": false, 00:13:53.795 "nvme_iov_md": false 00:13:53.795 }, 00:13:53.795 "driver_specific": { 00:13:53.795 "raid": { 00:13:53.795 "uuid": "f7d8c668-c1cc-4d99-9897-b04b89490db6", 
00:13:53.795 "strip_size_kb": 64, 00:13:53.795 "state": "online", 00:13:53.795 "raid_level": "raid5f", 00:13:53.795 "superblock": false, 00:13:53.795 "num_base_bdevs": 3, 00:13:53.795 "num_base_bdevs_discovered": 3, 00:13:53.795 "num_base_bdevs_operational": 3, 00:13:53.795 "base_bdevs_list": [ 00:13:53.795 { 00:13:53.795 "name": "NewBaseBdev", 00:13:53.795 "uuid": "fdec0bd1-b929-403b-b5c2-4e077744c36c", 00:13:53.795 "is_configured": true, 00:13:53.795 "data_offset": 0, 00:13:53.795 "data_size": 65536 00:13:53.795 }, 00:13:53.795 { 00:13:53.795 "name": "BaseBdev2", 00:13:53.795 "uuid": "f625e7b5-28fd-4865-8ff7-f8ab05c9c5a5", 00:13:53.795 "is_configured": true, 00:13:53.795 "data_offset": 0, 00:13:53.795 "data_size": 65536 00:13:53.795 }, 00:13:53.795 { 00:13:53.795 "name": "BaseBdev3", 00:13:53.795 "uuid": "f883d5d4-3e5c-427b-8fd0-d592c2f2ec93", 00:13:53.795 "is_configured": true, 00:13:53.795 "data_offset": 0, 00:13:53.795 "data_size": 65536 00:13:53.795 } 00:13:53.795 ] 00:13:53.795 } 00:13:53.795 } 00:13:53.795 }' 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:53.795 BaseBdev2 00:13:53.795 BaseBdev3' 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.795 12:33:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.795 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.795 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.795 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.795 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.795 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:53.796 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.796 12:33:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.796 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.056 [2024-11-19 12:33:59.076292] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.056 [2024-11-19 12:33:59.076383] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.056 [2024-11-19 12:33:59.076494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.056 [2024-11-19 12:33:59.076801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.056 [2024-11-19 12:33:59.076819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90668 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90668 ']' 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 
90668 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90668 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:54.056 killing process with pid 90668 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90668' 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90668 00:13:54.056 [2024-11-19 12:33:59.126288] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.056 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90668 00:13:54.056 [2024-11-19 12:33:59.187524] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.316 12:33:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:54.316 00:13:54.316 real 0m9.114s 00:13:54.316 user 0m15.276s 00:13:54.316 sys 0m1.923s 00:13:54.316 ************************************ 00:13:54.316 END TEST raid5f_state_function_test 00:13:54.316 ************************************ 00:13:54.316 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.316 12:33:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.576 12:33:59 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:54.576 12:33:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:54.576 
12:33:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.576 12:33:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.576 ************************************ 00:13:54.576 START TEST raid5f_state_function_test_sb 00:13:54.576 ************************************ 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.576 
12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91273 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91273' 00:13:54.576 Process raid pid: 91273 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 91273 00:13:54.576 12:33:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91273 ']' 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.576 12:33:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.576 [2024-11-19 12:33:59.763290] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:54.576 [2024-11-19 12:33:59.763540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.837 [2024-11-19 12:33:59.933409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.837 [2024-11-19 12:34:00.010861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.837 [2024-11-19 12:34:00.087811] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.837 [2024-11-19 12:34:00.087854] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:55.407 12:34:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.407 [2024-11-19 12:34:00.583194] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.407 [2024-11-19 12:34:00.583257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.407 [2024-11-19 12:34:00.583307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.407 [2024-11-19 12:34:00.583321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.407 [2024-11-19 12:34:00.583329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:55.407 [2024-11-19 12:34:00.583343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.407 "name": "Existed_Raid", 00:13:55.407 "uuid": "b3704958-3a7f-4709-8fde-6a3dbd658a86", 00:13:55.407 "strip_size_kb": 64, 00:13:55.407 "state": "configuring", 00:13:55.407 "raid_level": "raid5f", 00:13:55.407 "superblock": true, 00:13:55.407 "num_base_bdevs": 3, 00:13:55.407 "num_base_bdevs_discovered": 0, 00:13:55.407 "num_base_bdevs_operational": 3, 00:13:55.407 "base_bdevs_list": [ 00:13:55.407 { 00:13:55.407 "name": "BaseBdev1", 00:13:55.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.407 "is_configured": false, 00:13:55.407 "data_offset": 0, 00:13:55.407 "data_size": 0 00:13:55.407 }, 00:13:55.407 { 00:13:55.407 "name": "BaseBdev2", 00:13:55.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.407 "is_configured": false, 00:13:55.407 
"data_offset": 0, 00:13:55.407 "data_size": 0 00:13:55.407 }, 00:13:55.407 { 00:13:55.407 "name": "BaseBdev3", 00:13:55.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.407 "is_configured": false, 00:13:55.407 "data_offset": 0, 00:13:55.407 "data_size": 0 00:13:55.407 } 00:13:55.407 ] 00:13:55.407 }' 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.407 12:34:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.978 [2024-11-19 12:34:01.030374] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.978 [2024-11-19 12:34:01.030485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.978 [2024-11-19 12:34:01.042394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.978 [2024-11-19 12:34:01.042506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.978 [2024-11-19 12:34:01.042539] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.978 [2024-11-19 12:34:01.042568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.978 [2024-11-19 12:34:01.042590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:55.978 [2024-11-19 12:34:01.042664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.978 [2024-11-19 12:34:01.069838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.978 BaseBdev1 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.978 [ 00:13:55.978 { 00:13:55.978 "name": "BaseBdev1", 00:13:55.978 "aliases": [ 00:13:55.978 "d32beb70-1060-4a74-9e0c-0d47f3a93498" 00:13:55.978 ], 00:13:55.978 "product_name": "Malloc disk", 00:13:55.978 "block_size": 512, 00:13:55.978 "num_blocks": 65536, 00:13:55.978 "uuid": "d32beb70-1060-4a74-9e0c-0d47f3a93498", 00:13:55.978 "assigned_rate_limits": { 00:13:55.978 "rw_ios_per_sec": 0, 00:13:55.978 "rw_mbytes_per_sec": 0, 00:13:55.978 "r_mbytes_per_sec": 0, 00:13:55.978 "w_mbytes_per_sec": 0 00:13:55.978 }, 00:13:55.978 "claimed": true, 00:13:55.978 "claim_type": "exclusive_write", 00:13:55.978 "zoned": false, 00:13:55.978 "supported_io_types": { 00:13:55.978 "read": true, 00:13:55.978 "write": true, 00:13:55.978 "unmap": true, 00:13:55.978 "flush": true, 00:13:55.978 "reset": true, 00:13:55.978 "nvme_admin": false, 00:13:55.978 "nvme_io": false, 00:13:55.978 "nvme_io_md": false, 00:13:55.978 "write_zeroes": true, 00:13:55.978 "zcopy": true, 00:13:55.978 "get_zone_info": false, 00:13:55.978 "zone_management": false, 00:13:55.978 "zone_append": false, 00:13:55.978 "compare": false, 00:13:55.978 "compare_and_write": false, 00:13:55.978 "abort": true, 00:13:55.978 "seek_hole": false, 00:13:55.978 
"seek_data": false, 00:13:55.978 "copy": true, 00:13:55.978 "nvme_iov_md": false 00:13:55.978 }, 00:13:55.978 "memory_domains": [ 00:13:55.978 { 00:13:55.978 "dma_device_id": "system", 00:13:55.978 "dma_device_type": 1 00:13:55.978 }, 00:13:55.978 { 00:13:55.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.978 "dma_device_type": 2 00:13:55.978 } 00:13:55.978 ], 00:13:55.978 "driver_specific": {} 00:13:55.978 } 00:13:55.978 ] 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.978 "name": "Existed_Raid", 00:13:55.978 "uuid": "a10ad934-a17b-4a4d-807f-e640bbcc66a7", 00:13:55.978 "strip_size_kb": 64, 00:13:55.978 "state": "configuring", 00:13:55.978 "raid_level": "raid5f", 00:13:55.978 "superblock": true, 00:13:55.978 "num_base_bdevs": 3, 00:13:55.978 "num_base_bdevs_discovered": 1, 00:13:55.978 "num_base_bdevs_operational": 3, 00:13:55.978 "base_bdevs_list": [ 00:13:55.978 { 00:13:55.978 "name": "BaseBdev1", 00:13:55.978 "uuid": "d32beb70-1060-4a74-9e0c-0d47f3a93498", 00:13:55.978 "is_configured": true, 00:13:55.978 "data_offset": 2048, 00:13:55.978 "data_size": 63488 00:13:55.978 }, 00:13:55.978 { 00:13:55.978 "name": "BaseBdev2", 00:13:55.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.978 "is_configured": false, 00:13:55.978 "data_offset": 0, 00:13:55.978 "data_size": 0 00:13:55.978 }, 00:13:55.978 { 00:13:55.978 "name": "BaseBdev3", 00:13:55.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.978 "is_configured": false, 00:13:55.978 "data_offset": 0, 00:13:55.978 "data_size": 0 00:13:55.978 } 00:13:55.978 ] 00:13:55.978 }' 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.978 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.547 [2024-11-19 12:34:01.549000] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.547 [2024-11-19 12:34:01.549115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.547 [2024-11-19 12:34:01.561052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.547 [2024-11-19 12:34:01.563247] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.547 [2024-11-19 12:34:01.563302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.547 [2024-11-19 12:34:01.563313] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:56.547 [2024-11-19 12:34:01.563343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.547 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.547 "name": 
"Existed_Raid", 00:13:56.547 "uuid": "ad6391a7-fdba-4a9b-83b2-c6414c21410f", 00:13:56.547 "strip_size_kb": 64, 00:13:56.547 "state": "configuring", 00:13:56.547 "raid_level": "raid5f", 00:13:56.547 "superblock": true, 00:13:56.548 "num_base_bdevs": 3, 00:13:56.548 "num_base_bdevs_discovered": 1, 00:13:56.548 "num_base_bdevs_operational": 3, 00:13:56.548 "base_bdevs_list": [ 00:13:56.548 { 00:13:56.548 "name": "BaseBdev1", 00:13:56.548 "uuid": "d32beb70-1060-4a74-9e0c-0d47f3a93498", 00:13:56.548 "is_configured": true, 00:13:56.548 "data_offset": 2048, 00:13:56.548 "data_size": 63488 00:13:56.548 }, 00:13:56.548 { 00:13:56.548 "name": "BaseBdev2", 00:13:56.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.548 "is_configured": false, 00:13:56.548 "data_offset": 0, 00:13:56.548 "data_size": 0 00:13:56.548 }, 00:13:56.548 { 00:13:56.548 "name": "BaseBdev3", 00:13:56.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.548 "is_configured": false, 00:13:56.548 "data_offset": 0, 00:13:56.548 "data_size": 0 00:13:56.548 } 00:13:56.548 ] 00:13:56.548 }' 00:13:56.548 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.548 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.808 12:34:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:56.808 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.808 12:34:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.808 [2024-11-19 12:34:02.016570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.808 BaseBdev2 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.808 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.808 [ 00:13:56.808 { 00:13:56.808 "name": "BaseBdev2", 00:13:56.808 "aliases": [ 00:13:56.808 "435a1ae9-f402-463e-b8f6-994c6e45b9cb" 00:13:56.808 ], 00:13:56.808 "product_name": "Malloc disk", 00:13:56.808 "block_size": 512, 00:13:56.808 "num_blocks": 65536, 00:13:56.808 "uuid": "435a1ae9-f402-463e-b8f6-994c6e45b9cb", 00:13:56.808 "assigned_rate_limits": { 00:13:56.808 "rw_ios_per_sec": 0, 00:13:56.808 "rw_mbytes_per_sec": 0, 00:13:56.808 "r_mbytes_per_sec": 0, 00:13:56.808 "w_mbytes_per_sec": 0 00:13:56.808 }, 00:13:56.808 "claimed": true, 
00:13:56.808 "claim_type": "exclusive_write", 00:13:56.808 "zoned": false, 00:13:56.808 "supported_io_types": { 00:13:56.808 "read": true, 00:13:56.808 "write": true, 00:13:56.808 "unmap": true, 00:13:56.808 "flush": true, 00:13:56.808 "reset": true, 00:13:56.808 "nvme_admin": false, 00:13:56.808 "nvme_io": false, 00:13:56.808 "nvme_io_md": false, 00:13:56.808 "write_zeroes": true, 00:13:56.808 "zcopy": true, 00:13:56.808 "get_zone_info": false, 00:13:56.808 "zone_management": false, 00:13:56.808 "zone_append": false, 00:13:56.808 "compare": false, 00:13:56.808 "compare_and_write": false, 00:13:56.808 "abort": true, 00:13:56.808 "seek_hole": false, 00:13:56.808 "seek_data": false, 00:13:56.808 "copy": true, 00:13:56.808 "nvme_iov_md": false 00:13:56.808 }, 00:13:56.808 "memory_domains": [ 00:13:56.808 { 00:13:56.809 "dma_device_id": "system", 00:13:56.809 "dma_device_type": 1 00:13:56.809 }, 00:13:56.809 { 00:13:56.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.809 "dma_device_type": 2 00:13:56.809 } 00:13:56.809 ], 00:13:56.809 "driver_specific": {} 00:13:56.809 } 00:13:56.809 ] 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.809 12:34:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.809 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.069 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.069 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.069 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.069 "name": "Existed_Raid", 00:13:57.069 "uuid": "ad6391a7-fdba-4a9b-83b2-c6414c21410f", 00:13:57.069 "strip_size_kb": 64, 00:13:57.069 "state": "configuring", 00:13:57.069 "raid_level": "raid5f", 00:13:57.069 "superblock": true, 00:13:57.069 "num_base_bdevs": 3, 00:13:57.069 "num_base_bdevs_discovered": 2, 00:13:57.069 "num_base_bdevs_operational": 3, 00:13:57.069 "base_bdevs_list": [ 00:13:57.069 { 00:13:57.069 "name": "BaseBdev1", 00:13:57.069 "uuid": "d32beb70-1060-4a74-9e0c-0d47f3a93498", 
00:13:57.069 "is_configured": true, 00:13:57.069 "data_offset": 2048, 00:13:57.069 "data_size": 63488 00:13:57.069 }, 00:13:57.069 { 00:13:57.069 "name": "BaseBdev2", 00:13:57.069 "uuid": "435a1ae9-f402-463e-b8f6-994c6e45b9cb", 00:13:57.069 "is_configured": true, 00:13:57.069 "data_offset": 2048, 00:13:57.069 "data_size": 63488 00:13:57.069 }, 00:13:57.069 { 00:13:57.069 "name": "BaseBdev3", 00:13:57.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.069 "is_configured": false, 00:13:57.069 "data_offset": 0, 00:13:57.069 "data_size": 0 00:13:57.069 } 00:13:57.069 ] 00:13:57.069 }' 00:13:57.069 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.069 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.329 [2024-11-19 12:34:02.500767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.329 [2024-11-19 12:34:02.501047] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:57.329 [2024-11-19 12:34:02.501071] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.329 [2024-11-19 12:34:02.501418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:57.329 BaseBdev3 00:13:57.329 [2024-11-19 12:34:02.501946] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:57.329 [2024-11-19 12:34:02.501960] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:57.329 
[2024-11-19 12:34:02.502111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:57.329 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.330 [ 00:13:57.330 { 00:13:57.330 "name": "BaseBdev3", 00:13:57.330 "aliases": [ 00:13:57.330 "6d69523f-0d80-4c75-b445-b23744b1e596" 00:13:57.330 ], 00:13:57.330 "product_name": "Malloc disk", 00:13:57.330 "block_size": 512, 00:13:57.330 "num_blocks": 
65536, 00:13:57.330 "uuid": "6d69523f-0d80-4c75-b445-b23744b1e596", 00:13:57.330 "assigned_rate_limits": { 00:13:57.330 "rw_ios_per_sec": 0, 00:13:57.330 "rw_mbytes_per_sec": 0, 00:13:57.330 "r_mbytes_per_sec": 0, 00:13:57.330 "w_mbytes_per_sec": 0 00:13:57.330 }, 00:13:57.330 "claimed": true, 00:13:57.330 "claim_type": "exclusive_write", 00:13:57.330 "zoned": false, 00:13:57.330 "supported_io_types": { 00:13:57.330 "read": true, 00:13:57.330 "write": true, 00:13:57.330 "unmap": true, 00:13:57.330 "flush": true, 00:13:57.330 "reset": true, 00:13:57.330 "nvme_admin": false, 00:13:57.330 "nvme_io": false, 00:13:57.330 "nvme_io_md": false, 00:13:57.330 "write_zeroes": true, 00:13:57.330 "zcopy": true, 00:13:57.330 "get_zone_info": false, 00:13:57.330 "zone_management": false, 00:13:57.330 "zone_append": false, 00:13:57.330 "compare": false, 00:13:57.330 "compare_and_write": false, 00:13:57.330 "abort": true, 00:13:57.330 "seek_hole": false, 00:13:57.330 "seek_data": false, 00:13:57.330 "copy": true, 00:13:57.330 "nvme_iov_md": false 00:13:57.330 }, 00:13:57.330 "memory_domains": [ 00:13:57.330 { 00:13:57.330 "dma_device_id": "system", 00:13:57.330 "dma_device_type": 1 00:13:57.330 }, 00:13:57.330 { 00:13:57.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.330 "dma_device_type": 2 00:13:57.330 } 00:13:57.330 ], 00:13:57.330 "driver_specific": {} 00:13:57.330 } 00:13:57.330 ] 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 
00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.330 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.590 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.590 "name": "Existed_Raid", 00:13:57.590 "uuid": "ad6391a7-fdba-4a9b-83b2-c6414c21410f", 00:13:57.590 "strip_size_kb": 64, 00:13:57.590 "state": "online", 00:13:57.590 "raid_level": "raid5f", 00:13:57.590 "superblock": true, 
00:13:57.590 "num_base_bdevs": 3, 00:13:57.590 "num_base_bdevs_discovered": 3, 00:13:57.590 "num_base_bdevs_operational": 3, 00:13:57.590 "base_bdevs_list": [ 00:13:57.590 { 00:13:57.590 "name": "BaseBdev1", 00:13:57.590 "uuid": "d32beb70-1060-4a74-9e0c-0d47f3a93498", 00:13:57.590 "is_configured": true, 00:13:57.590 "data_offset": 2048, 00:13:57.590 "data_size": 63488 00:13:57.590 }, 00:13:57.590 { 00:13:57.590 "name": "BaseBdev2", 00:13:57.590 "uuid": "435a1ae9-f402-463e-b8f6-994c6e45b9cb", 00:13:57.590 "is_configured": true, 00:13:57.590 "data_offset": 2048, 00:13:57.590 "data_size": 63488 00:13:57.590 }, 00:13:57.590 { 00:13:57.590 "name": "BaseBdev3", 00:13:57.590 "uuid": "6d69523f-0d80-4c75-b445-b23744b1e596", 00:13:57.590 "is_configured": true, 00:13:57.590 "data_offset": 2048, 00:13:57.590 "data_size": 63488 00:13:57.590 } 00:13:57.590 ] 00:13:57.590 }' 00:13:57.590 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.590 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.851 [2024-11-19 12:34:02.968170] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:57.851 "name": "Existed_Raid", 00:13:57.851 "aliases": [ 00:13:57.851 "ad6391a7-fdba-4a9b-83b2-c6414c21410f" 00:13:57.851 ], 00:13:57.851 "product_name": "Raid Volume", 00:13:57.851 "block_size": 512, 00:13:57.851 "num_blocks": 126976, 00:13:57.851 "uuid": "ad6391a7-fdba-4a9b-83b2-c6414c21410f", 00:13:57.851 "assigned_rate_limits": { 00:13:57.851 "rw_ios_per_sec": 0, 00:13:57.851 "rw_mbytes_per_sec": 0, 00:13:57.851 "r_mbytes_per_sec": 0, 00:13:57.851 "w_mbytes_per_sec": 0 00:13:57.851 }, 00:13:57.851 "claimed": false, 00:13:57.851 "zoned": false, 00:13:57.851 "supported_io_types": { 00:13:57.851 "read": true, 00:13:57.851 "write": true, 00:13:57.851 "unmap": false, 00:13:57.851 "flush": false, 00:13:57.851 "reset": true, 00:13:57.851 "nvme_admin": false, 00:13:57.851 "nvme_io": false, 00:13:57.851 "nvme_io_md": false, 00:13:57.851 "write_zeroes": true, 00:13:57.851 "zcopy": false, 00:13:57.851 "get_zone_info": false, 00:13:57.851 "zone_management": false, 00:13:57.851 "zone_append": false, 00:13:57.851 "compare": false, 00:13:57.851 "compare_and_write": false, 00:13:57.851 "abort": false, 00:13:57.851 "seek_hole": false, 00:13:57.851 "seek_data": false, 00:13:57.851 "copy": false, 00:13:57.851 "nvme_iov_md": false 00:13:57.851 }, 00:13:57.851 "driver_specific": { 00:13:57.851 "raid": { 00:13:57.851 "uuid": "ad6391a7-fdba-4a9b-83b2-c6414c21410f", 00:13:57.851 
"strip_size_kb": 64, 00:13:57.851 "state": "online", 00:13:57.851 "raid_level": "raid5f", 00:13:57.851 "superblock": true, 00:13:57.851 "num_base_bdevs": 3, 00:13:57.851 "num_base_bdevs_discovered": 3, 00:13:57.851 "num_base_bdevs_operational": 3, 00:13:57.851 "base_bdevs_list": [ 00:13:57.851 { 00:13:57.851 "name": "BaseBdev1", 00:13:57.851 "uuid": "d32beb70-1060-4a74-9e0c-0d47f3a93498", 00:13:57.851 "is_configured": true, 00:13:57.851 "data_offset": 2048, 00:13:57.851 "data_size": 63488 00:13:57.851 }, 00:13:57.851 { 00:13:57.851 "name": "BaseBdev2", 00:13:57.851 "uuid": "435a1ae9-f402-463e-b8f6-994c6e45b9cb", 00:13:57.851 "is_configured": true, 00:13:57.851 "data_offset": 2048, 00:13:57.851 "data_size": 63488 00:13:57.851 }, 00:13:57.851 { 00:13:57.851 "name": "BaseBdev3", 00:13:57.851 "uuid": "6d69523f-0d80-4c75-b445-b23744b1e596", 00:13:57.851 "is_configured": true, 00:13:57.851 "data_offset": 2048, 00:13:57.851 "data_size": 63488 00:13:57.851 } 00:13:57.851 ] 00:13:57.851 } 00:13:57.851 } 00:13:57.851 }' 00:13:57.851 12:34:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.851 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:57.851 BaseBdev2 00:13:57.851 BaseBdev3' 00:13:57.851 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.851 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:57.851 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.851 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.852 12:34:03 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:57.852 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.852 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.852 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.112 12:34:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.112 [2024-11-19 12:34:03.235731] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.112 
12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.112 "name": "Existed_Raid", 00:13:58.112 "uuid": "ad6391a7-fdba-4a9b-83b2-c6414c21410f", 00:13:58.112 "strip_size_kb": 64, 00:13:58.112 "state": "online", 00:13:58.112 "raid_level": "raid5f", 00:13:58.112 "superblock": true, 00:13:58.112 "num_base_bdevs": 3, 00:13:58.112 "num_base_bdevs_discovered": 2, 00:13:58.112 "num_base_bdevs_operational": 2, 00:13:58.112 
"base_bdevs_list": [ 00:13:58.112 { 00:13:58.112 "name": null, 00:13:58.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.112 "is_configured": false, 00:13:58.112 "data_offset": 0, 00:13:58.112 "data_size": 63488 00:13:58.112 }, 00:13:58.112 { 00:13:58.112 "name": "BaseBdev2", 00:13:58.112 "uuid": "435a1ae9-f402-463e-b8f6-994c6e45b9cb", 00:13:58.112 "is_configured": true, 00:13:58.112 "data_offset": 2048, 00:13:58.112 "data_size": 63488 00:13:58.112 }, 00:13:58.112 { 00:13:58.112 "name": "BaseBdev3", 00:13:58.112 "uuid": "6d69523f-0d80-4c75-b445-b23744b1e596", 00:13:58.112 "is_configured": true, 00:13:58.112 "data_offset": 2048, 00:13:58.112 "data_size": 63488 00:13:58.112 } 00:13:58.112 ] 00:13:58.112 }' 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.112 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:58.681 12:34:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.681 [2024-11-19 12:34:03.707989] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:58.681 [2024-11-19 12:34:03.708249] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.681 [2024-11-19 12:34:03.728763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:58.681 12:34:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.681 [2024-11-19 12:34:03.788680] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:58.681 [2024-11-19 12:34:03.788757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.681 BaseBdev2 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.681 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.681 [ 00:13:58.681 { 00:13:58.681 "name": "BaseBdev2", 
00:13:58.681 "aliases": [ 00:13:58.681 "bb809f45-f2a4-4a18-916a-bf1447684d95" 00:13:58.681 ], 00:13:58.681 "product_name": "Malloc disk", 00:13:58.681 "block_size": 512, 00:13:58.681 "num_blocks": 65536, 00:13:58.681 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:13:58.681 "assigned_rate_limits": { 00:13:58.681 "rw_ios_per_sec": 0, 00:13:58.681 "rw_mbytes_per_sec": 0, 00:13:58.681 "r_mbytes_per_sec": 0, 00:13:58.681 "w_mbytes_per_sec": 0 00:13:58.681 }, 00:13:58.681 "claimed": false, 00:13:58.681 "zoned": false, 00:13:58.681 "supported_io_types": { 00:13:58.681 "read": true, 00:13:58.681 "write": true, 00:13:58.681 "unmap": true, 00:13:58.682 "flush": true, 00:13:58.682 "reset": true, 00:13:58.682 "nvme_admin": false, 00:13:58.682 "nvme_io": false, 00:13:58.682 "nvme_io_md": false, 00:13:58.682 "write_zeroes": true, 00:13:58.682 "zcopy": true, 00:13:58.682 "get_zone_info": false, 00:13:58.682 "zone_management": false, 00:13:58.682 "zone_append": false, 00:13:58.682 "compare": false, 00:13:58.682 "compare_and_write": false, 00:13:58.682 "abort": true, 00:13:58.682 "seek_hole": false, 00:13:58.682 "seek_data": false, 00:13:58.682 "copy": true, 00:13:58.682 "nvme_iov_md": false 00:13:58.682 }, 00:13:58.682 "memory_domains": [ 00:13:58.682 { 00:13:58.682 "dma_device_id": "system", 00:13:58.682 "dma_device_type": 1 00:13:58.682 }, 00:13:58.682 { 00:13:58.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.682 "dma_device_type": 2 00:13:58.682 } 00:13:58.682 ], 00:13:58.682 "driver_specific": {} 00:13:58.682 } 00:13:58.682 ] 00:13:58.682 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.682 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:58.682 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:58.682 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:13:58.682 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:58.682 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.682 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.942 BaseBdev3 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.942 [ 00:13:58.942 { 00:13:58.942 "name": "BaseBdev3", 00:13:58.942 "aliases": [ 00:13:58.942 "e1b5f541-dad3-4849-a510-f4f2cfcf087f" 00:13:58.942 ], 00:13:58.942 "product_name": "Malloc disk", 00:13:58.942 "block_size": 512, 00:13:58.942 "num_blocks": 65536, 00:13:58.942 "uuid": "e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:13:58.942 "assigned_rate_limits": { 00:13:58.942 "rw_ios_per_sec": 0, 00:13:58.942 "rw_mbytes_per_sec": 0, 00:13:58.942 "r_mbytes_per_sec": 0, 00:13:58.942 "w_mbytes_per_sec": 0 00:13:58.942 }, 00:13:58.942 "claimed": false, 00:13:58.942 "zoned": false, 00:13:58.942 "supported_io_types": { 00:13:58.942 "read": true, 00:13:58.942 "write": true, 00:13:58.942 "unmap": true, 00:13:58.942 "flush": true, 00:13:58.942 "reset": true, 00:13:58.942 "nvme_admin": false, 00:13:58.942 "nvme_io": false, 00:13:58.942 "nvme_io_md": false, 00:13:58.942 "write_zeroes": true, 00:13:58.942 "zcopy": true, 00:13:58.942 "get_zone_info": false, 00:13:58.942 "zone_management": false, 00:13:58.942 "zone_append": false, 00:13:58.942 "compare": false, 00:13:58.942 "compare_and_write": false, 00:13:58.942 "abort": true, 00:13:58.942 "seek_hole": false, 00:13:58.942 "seek_data": false, 00:13:58.942 "copy": true, 00:13:58.942 "nvme_iov_md": false 00:13:58.942 }, 00:13:58.942 "memory_domains": [ 00:13:58.942 { 00:13:58.942 "dma_device_id": "system", 00:13:58.942 "dma_device_type": 1 00:13:58.942 }, 00:13:58.942 { 00:13:58.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.942 "dma_device_type": 2 00:13:58.942 } 00:13:58.942 ], 00:13:58.942 "driver_specific": {} 00:13:58.942 } 00:13:58.942 ] 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:58.942 
12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.942 [2024-11-19 12:34:03.988123] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.942 [2024-11-19 12:34:03.988281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.942 [2024-11-19 12:34:03.988335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.942 [2024-11-19 12:34:03.990496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.942 12:34:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.942 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.942 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.942 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.942 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.942 "name": "Existed_Raid", 00:13:58.942 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:13:58.942 "strip_size_kb": 64, 00:13:58.942 "state": "configuring", 00:13:58.942 "raid_level": "raid5f", 00:13:58.942 "superblock": true, 00:13:58.942 "num_base_bdevs": 3, 00:13:58.942 "num_base_bdevs_discovered": 2, 00:13:58.942 "num_base_bdevs_operational": 3, 00:13:58.942 "base_bdevs_list": [ 00:13:58.942 { 00:13:58.942 "name": "BaseBdev1", 00:13:58.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.942 "is_configured": false, 00:13:58.942 "data_offset": 0, 00:13:58.942 "data_size": 0 00:13:58.942 }, 00:13:58.942 { 00:13:58.942 "name": "BaseBdev2", 00:13:58.942 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:13:58.942 "is_configured": true, 00:13:58.942 "data_offset": 2048, 00:13:58.942 "data_size": 63488 00:13:58.942 }, 00:13:58.942 { 00:13:58.942 "name": "BaseBdev3", 00:13:58.942 "uuid": 
"e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:13:58.942 "is_configured": true, 00:13:58.942 "data_offset": 2048, 00:13:58.942 "data_size": 63488 00:13:58.942 } 00:13:58.942 ] 00:13:58.942 }' 00:13:58.942 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.943 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:59.202 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.202 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.202 [2024-11-19 12:34:04.431492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:59.202 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.202 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:59.202 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.203 12:34:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.203 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.462 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.462 "name": "Existed_Raid", 00:13:59.462 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:13:59.462 "strip_size_kb": 64, 00:13:59.462 "state": "configuring", 00:13:59.462 "raid_level": "raid5f", 00:13:59.462 "superblock": true, 00:13:59.462 "num_base_bdevs": 3, 00:13:59.462 "num_base_bdevs_discovered": 1, 00:13:59.462 "num_base_bdevs_operational": 3, 00:13:59.462 "base_bdevs_list": [ 00:13:59.462 { 00:13:59.462 "name": "BaseBdev1", 00:13:59.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.462 "is_configured": false, 00:13:59.462 "data_offset": 0, 00:13:59.462 "data_size": 0 00:13:59.462 }, 00:13:59.462 { 00:13:59.462 "name": null, 00:13:59.462 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:13:59.462 "is_configured": false, 00:13:59.462 "data_offset": 0, 00:13:59.462 "data_size": 63488 00:13:59.462 }, 00:13:59.462 { 00:13:59.462 "name": "BaseBdev3", 00:13:59.462 "uuid": "e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:13:59.462 "is_configured": true, 00:13:59.462 "data_offset": 2048, 00:13:59.462 "data_size": 63488 00:13:59.462 } 00:13:59.462 ] 
00:13:59.462 }' 00:13:59.462 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.462 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.722 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.723 [2024-11-19 12:34:04.883703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.723 BaseBdev1 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.723 [ 00:13:59.723 { 00:13:59.723 "name": "BaseBdev1", 00:13:59.723 "aliases": [ 00:13:59.723 "4500e47e-5139-419f-ab1b-bf19728f95a9" 00:13:59.723 ], 00:13:59.723 "product_name": "Malloc disk", 00:13:59.723 "block_size": 512, 00:13:59.723 "num_blocks": 65536, 00:13:59.723 "uuid": "4500e47e-5139-419f-ab1b-bf19728f95a9", 00:13:59.723 "assigned_rate_limits": { 00:13:59.723 "rw_ios_per_sec": 0, 00:13:59.723 "rw_mbytes_per_sec": 0, 00:13:59.723 "r_mbytes_per_sec": 0, 00:13:59.723 "w_mbytes_per_sec": 0 00:13:59.723 }, 00:13:59.723 "claimed": true, 00:13:59.723 "claim_type": "exclusive_write", 00:13:59.723 "zoned": false, 00:13:59.723 "supported_io_types": { 00:13:59.723 "read": true, 00:13:59.723 "write": true, 00:13:59.723 "unmap": true, 00:13:59.723 "flush": true, 00:13:59.723 "reset": true, 00:13:59.723 "nvme_admin": false, 00:13:59.723 "nvme_io": false, 00:13:59.723 
"nvme_io_md": false, 00:13:59.723 "write_zeroes": true, 00:13:59.723 "zcopy": true, 00:13:59.723 "get_zone_info": false, 00:13:59.723 "zone_management": false, 00:13:59.723 "zone_append": false, 00:13:59.723 "compare": false, 00:13:59.723 "compare_and_write": false, 00:13:59.723 "abort": true, 00:13:59.723 "seek_hole": false, 00:13:59.723 "seek_data": false, 00:13:59.723 "copy": true, 00:13:59.723 "nvme_iov_md": false 00:13:59.723 }, 00:13:59.723 "memory_domains": [ 00:13:59.723 { 00:13:59.723 "dma_device_id": "system", 00:13:59.723 "dma_device_type": 1 00:13:59.723 }, 00:13:59.723 { 00:13:59.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.723 "dma_device_type": 2 00:13:59.723 } 00:13:59.723 ], 00:13:59.723 "driver_specific": {} 00:13:59.723 } 00:13:59.723 ] 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.723 
12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.723 "name": "Existed_Raid", 00:13:59.723 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:13:59.723 "strip_size_kb": 64, 00:13:59.723 "state": "configuring", 00:13:59.723 "raid_level": "raid5f", 00:13:59.723 "superblock": true, 00:13:59.723 "num_base_bdevs": 3, 00:13:59.723 "num_base_bdevs_discovered": 2, 00:13:59.723 "num_base_bdevs_operational": 3, 00:13:59.723 "base_bdevs_list": [ 00:13:59.723 { 00:13:59.723 "name": "BaseBdev1", 00:13:59.723 "uuid": "4500e47e-5139-419f-ab1b-bf19728f95a9", 00:13:59.723 "is_configured": true, 00:13:59.723 "data_offset": 2048, 00:13:59.723 "data_size": 63488 00:13:59.723 }, 00:13:59.723 { 00:13:59.723 "name": null, 00:13:59.723 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:13:59.723 "is_configured": false, 00:13:59.723 "data_offset": 0, 00:13:59.723 "data_size": 63488 00:13:59.723 }, 00:13:59.723 { 00:13:59.723 "name": "BaseBdev3", 00:13:59.723 "uuid": "e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:13:59.723 "is_configured": true, 00:13:59.723 "data_offset": 2048, 00:13:59.723 "data_size": 63488 00:13:59.723 } 
00:13:59.723 ] 00:13:59.723 }' 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.723 12:34:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 [2024-11-19 12:34:05.402865] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.294 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.294 "name": "Existed_Raid", 00:14:00.294 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:14:00.294 "strip_size_kb": 64, 00:14:00.294 "state": "configuring", 00:14:00.294 "raid_level": "raid5f", 00:14:00.294 "superblock": true, 00:14:00.294 "num_base_bdevs": 3, 00:14:00.294 "num_base_bdevs_discovered": 1, 00:14:00.294 "num_base_bdevs_operational": 3, 00:14:00.294 "base_bdevs_list": [ 00:14:00.294 { 00:14:00.294 "name": "BaseBdev1", 00:14:00.294 "uuid": "4500e47e-5139-419f-ab1b-bf19728f95a9", 00:14:00.294 "is_configured": true, 
00:14:00.294 "data_offset": 2048, 00:14:00.295 "data_size": 63488 00:14:00.295 }, 00:14:00.295 { 00:14:00.295 "name": null, 00:14:00.295 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:14:00.295 "is_configured": false, 00:14:00.295 "data_offset": 0, 00:14:00.295 "data_size": 63488 00:14:00.295 }, 00:14:00.295 { 00:14:00.295 "name": null, 00:14:00.295 "uuid": "e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:14:00.295 "is_configured": false, 00:14:00.295 "data_offset": 0, 00:14:00.295 "data_size": 63488 00:14:00.295 } 00:14:00.295 ] 00:14:00.295 }' 00:14:00.295 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.295 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.866 [2024-11-19 12:34:05.902711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.866 12:34:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:00.866 "name": "Existed_Raid", 00:14:00.866 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:14:00.866 "strip_size_kb": 64, 00:14:00.866 "state": "configuring", 00:14:00.866 "raid_level": "raid5f", 00:14:00.866 "superblock": true, 00:14:00.866 "num_base_bdevs": 3, 00:14:00.866 "num_base_bdevs_discovered": 2, 00:14:00.866 "num_base_bdevs_operational": 3, 00:14:00.866 "base_bdevs_list": [ 00:14:00.866 { 00:14:00.866 "name": "BaseBdev1", 00:14:00.866 "uuid": "4500e47e-5139-419f-ab1b-bf19728f95a9", 00:14:00.866 "is_configured": true, 00:14:00.866 "data_offset": 2048, 00:14:00.866 "data_size": 63488 00:14:00.866 }, 00:14:00.866 { 00:14:00.866 "name": null, 00:14:00.866 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:14:00.866 "is_configured": false, 00:14:00.866 "data_offset": 0, 00:14:00.866 "data_size": 63488 00:14:00.866 }, 00:14:00.866 { 00:14:00.866 "name": "BaseBdev3", 00:14:00.866 "uuid": "e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:14:00.866 "is_configured": true, 00:14:00.866 "data_offset": 2048, 00:14:00.866 "data_size": 63488 00:14:00.866 } 00:14:00.866 ] 00:14:00.866 }' 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.866 12:34:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.127 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.127 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:01.127 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.127 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.127 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.127 12:34:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:01.127 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:01.127 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.127 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.127 [2024-11-19 12:34:06.377924] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.387 12:34:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.387 "name": "Existed_Raid", 00:14:01.387 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:14:01.387 "strip_size_kb": 64, 00:14:01.387 "state": "configuring", 00:14:01.387 "raid_level": "raid5f", 00:14:01.387 "superblock": true, 00:14:01.387 "num_base_bdevs": 3, 00:14:01.387 "num_base_bdevs_discovered": 1, 00:14:01.387 "num_base_bdevs_operational": 3, 00:14:01.387 "base_bdevs_list": [ 00:14:01.387 { 00:14:01.387 "name": null, 00:14:01.387 "uuid": "4500e47e-5139-419f-ab1b-bf19728f95a9", 00:14:01.387 "is_configured": false, 00:14:01.387 "data_offset": 0, 00:14:01.387 "data_size": 63488 00:14:01.387 }, 00:14:01.387 { 00:14:01.387 "name": null, 00:14:01.387 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:14:01.387 "is_configured": false, 00:14:01.387 "data_offset": 0, 00:14:01.387 "data_size": 63488 00:14:01.387 }, 00:14:01.387 { 00:14:01.387 "name": "BaseBdev3", 00:14:01.387 "uuid": "e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:14:01.387 "is_configured": true, 00:14:01.387 "data_offset": 2048, 00:14:01.387 "data_size": 63488 00:14:01.387 } 00:14:01.387 ] 00:14:01.387 }' 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.387 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 
00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.647 [2024-11-19 12:34:06.841428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.647 12:34:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.647 "name": "Existed_Raid", 00:14:01.647 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:14:01.647 "strip_size_kb": 64, 00:14:01.647 "state": "configuring", 00:14:01.647 "raid_level": "raid5f", 00:14:01.647 "superblock": true, 00:14:01.647 "num_base_bdevs": 3, 00:14:01.647 "num_base_bdevs_discovered": 2, 00:14:01.647 "num_base_bdevs_operational": 3, 00:14:01.647 "base_bdevs_list": [ 00:14:01.647 { 00:14:01.647 "name": null, 00:14:01.647 "uuid": "4500e47e-5139-419f-ab1b-bf19728f95a9", 00:14:01.647 "is_configured": false, 00:14:01.647 "data_offset": 0, 00:14:01.647 "data_size": 63488 00:14:01.647 }, 00:14:01.647 { 00:14:01.647 "name": "BaseBdev2", 00:14:01.647 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:14:01.647 "is_configured": true, 00:14:01.647 "data_offset": 2048, 00:14:01.647 "data_size": 63488 00:14:01.647 }, 00:14:01.647 { 
00:14:01.647 "name": "BaseBdev3", 00:14:01.647 "uuid": "e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:14:01.647 "is_configured": true, 00:14:01.647 "data_offset": 2048, 00:14:01.647 "data_size": 63488 00:14:01.647 } 00:14:01.647 ] 00:14:01.647 }' 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.647 12:34:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4500e47e-5139-419f-ab1b-bf19728f95a9 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.217 [2024-11-19 12:34:07.373553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:02.217 [2024-11-19 12:34:07.373870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:02.217 [2024-11-19 12:34:07.373923] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:02.217 [2024-11-19 12:34:07.374288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:02.217 NewBaseBdev 00:14:02.217 [2024-11-19 12:34:07.374855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:02.217 [2024-11-19 12:34:07.374872] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:02.217 [2024-11-19 12:34:07.375004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- 
# rpc_cmd bdev_wait_for_examine 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.217 [ 00:14:02.217 { 00:14:02.217 "name": "NewBaseBdev", 00:14:02.217 "aliases": [ 00:14:02.217 "4500e47e-5139-419f-ab1b-bf19728f95a9" 00:14:02.217 ], 00:14:02.217 "product_name": "Malloc disk", 00:14:02.217 "block_size": 512, 00:14:02.217 "num_blocks": 65536, 00:14:02.217 "uuid": "4500e47e-5139-419f-ab1b-bf19728f95a9", 00:14:02.217 "assigned_rate_limits": { 00:14:02.217 "rw_ios_per_sec": 0, 00:14:02.217 "rw_mbytes_per_sec": 0, 00:14:02.217 "r_mbytes_per_sec": 0, 00:14:02.217 "w_mbytes_per_sec": 0 00:14:02.217 }, 00:14:02.217 "claimed": true, 00:14:02.217 "claim_type": "exclusive_write", 00:14:02.217 "zoned": false, 00:14:02.217 "supported_io_types": { 00:14:02.217 "read": true, 00:14:02.217 "write": true, 00:14:02.217 "unmap": true, 00:14:02.217 "flush": true, 00:14:02.217 "reset": true, 00:14:02.217 "nvme_admin": false, 00:14:02.217 "nvme_io": false, 00:14:02.217 "nvme_io_md": false, 00:14:02.217 "write_zeroes": true, 00:14:02.217 "zcopy": true, 00:14:02.217 "get_zone_info": false, 00:14:02.217 "zone_management": false, 00:14:02.217 "zone_append": false, 00:14:02.217 "compare": false, 00:14:02.217 "compare_and_write": false, 00:14:02.217 "abort": true, 00:14:02.217 "seek_hole": false, 00:14:02.217 "seek_data": false, 00:14:02.217 
"copy": true, 00:14:02.217 "nvme_iov_md": false 00:14:02.217 }, 00:14:02.217 "memory_domains": [ 00:14:02.217 { 00:14:02.217 "dma_device_id": "system", 00:14:02.217 "dma_device_type": 1 00:14:02.217 }, 00:14:02.217 { 00:14:02.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.217 "dma_device_type": 2 00:14:02.217 } 00:14:02.217 ], 00:14:02.217 "driver_specific": {} 00:14:02.217 } 00:14:02.217 ] 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.217 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.218 12:34:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.218 "name": "Existed_Raid", 00:14:02.218 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:14:02.218 "strip_size_kb": 64, 00:14:02.218 "state": "online", 00:14:02.218 "raid_level": "raid5f", 00:14:02.218 "superblock": true, 00:14:02.218 "num_base_bdevs": 3, 00:14:02.218 "num_base_bdevs_discovered": 3, 00:14:02.218 "num_base_bdevs_operational": 3, 00:14:02.218 "base_bdevs_list": [ 00:14:02.218 { 00:14:02.218 "name": "NewBaseBdev", 00:14:02.218 "uuid": "4500e47e-5139-419f-ab1b-bf19728f95a9", 00:14:02.218 "is_configured": true, 00:14:02.218 "data_offset": 2048, 00:14:02.218 "data_size": 63488 00:14:02.218 }, 00:14:02.218 { 00:14:02.218 "name": "BaseBdev2", 00:14:02.218 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:14:02.218 "is_configured": true, 00:14:02.218 "data_offset": 2048, 00:14:02.218 "data_size": 63488 00:14:02.218 }, 00:14:02.218 { 00:14:02.218 "name": "BaseBdev3", 00:14:02.218 "uuid": "e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:14:02.218 "is_configured": true, 00:14:02.218 "data_offset": 2048, 00:14:02.218 "data_size": 63488 00:14:02.218 } 00:14:02.218 ] 00:14:02.218 }' 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.218 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties 
Existed_Raid 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.788 [2024-11-19 12:34:07.849087] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.788 "name": "Existed_Raid", 00:14:02.788 "aliases": [ 00:14:02.788 "87b89c21-6e11-47aa-ac9a-4dae70cdf165" 00:14:02.788 ], 00:14:02.788 "product_name": "Raid Volume", 00:14:02.788 "block_size": 512, 00:14:02.788 "num_blocks": 126976, 00:14:02.788 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:14:02.788 "assigned_rate_limits": { 00:14:02.788 "rw_ios_per_sec": 0, 00:14:02.788 "rw_mbytes_per_sec": 0, 00:14:02.788 "r_mbytes_per_sec": 0, 00:14:02.788 "w_mbytes_per_sec": 0 00:14:02.788 }, 00:14:02.788 "claimed": false, 00:14:02.788 "zoned": false, 00:14:02.788 "supported_io_types": { 
00:14:02.788 "read": true, 00:14:02.788 "write": true, 00:14:02.788 "unmap": false, 00:14:02.788 "flush": false, 00:14:02.788 "reset": true, 00:14:02.788 "nvme_admin": false, 00:14:02.788 "nvme_io": false, 00:14:02.788 "nvme_io_md": false, 00:14:02.788 "write_zeroes": true, 00:14:02.788 "zcopy": false, 00:14:02.788 "get_zone_info": false, 00:14:02.788 "zone_management": false, 00:14:02.788 "zone_append": false, 00:14:02.788 "compare": false, 00:14:02.788 "compare_and_write": false, 00:14:02.788 "abort": false, 00:14:02.788 "seek_hole": false, 00:14:02.788 "seek_data": false, 00:14:02.788 "copy": false, 00:14:02.788 "nvme_iov_md": false 00:14:02.788 }, 00:14:02.788 "driver_specific": { 00:14:02.788 "raid": { 00:14:02.788 "uuid": "87b89c21-6e11-47aa-ac9a-4dae70cdf165", 00:14:02.788 "strip_size_kb": 64, 00:14:02.788 "state": "online", 00:14:02.788 "raid_level": "raid5f", 00:14:02.788 "superblock": true, 00:14:02.788 "num_base_bdevs": 3, 00:14:02.788 "num_base_bdevs_discovered": 3, 00:14:02.788 "num_base_bdevs_operational": 3, 00:14:02.788 "base_bdevs_list": [ 00:14:02.788 { 00:14:02.788 "name": "NewBaseBdev", 00:14:02.788 "uuid": "4500e47e-5139-419f-ab1b-bf19728f95a9", 00:14:02.788 "is_configured": true, 00:14:02.788 "data_offset": 2048, 00:14:02.788 "data_size": 63488 00:14:02.788 }, 00:14:02.788 { 00:14:02.788 "name": "BaseBdev2", 00:14:02.788 "uuid": "bb809f45-f2a4-4a18-916a-bf1447684d95", 00:14:02.788 "is_configured": true, 00:14:02.788 "data_offset": 2048, 00:14:02.788 "data_size": 63488 00:14:02.788 }, 00:14:02.788 { 00:14:02.788 "name": "BaseBdev3", 00:14:02.788 "uuid": "e1b5f541-dad3-4849-a510-f4f2cfcf087f", 00:14:02.788 "is_configured": true, 00:14:02.788 "data_offset": 2048, 00:14:02.788 "data_size": 63488 00:14:02.788 } 00:14:02.788 ] 00:14:02.788 } 00:14:02.788 } 00:14:02.788 }' 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:02.788 BaseBdev2 00:14:02.788 BaseBdev3' 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.788 12:34:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.788 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.788 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.788 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.788 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:02.788 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.789 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.789 12:34:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.789 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.052 [2024-11-19 12:34:08.092390] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.052 [2024-11-19 12:34:08.092485] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:14:03.052 [2024-11-19 12:34:08.092581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.052 [2024-11-19 12:34:08.092884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.052 [2024-11-19 12:34:08.092918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91273 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91273 ']' 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91273 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91273 00:14:03.052 killing process with pid 91273 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91273' 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91273 00:14:03.052 [2024-11-19 12:34:08.142008] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.052 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 
-- # wait 91273 00:14:03.052 [2024-11-19 12:34:08.200506] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.652 12:34:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:03.652 00:14:03.652 real 0m8.936s 00:14:03.652 user 0m14.802s 00:14:03.652 sys 0m2.005s 00:14:03.652 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.652 12:34:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.652 ************************************ 00:14:03.652 END TEST raid5f_state_function_test_sb 00:14:03.652 ************************************ 00:14:03.652 12:34:08 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:03.652 12:34:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:03.652 12:34:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:03.652 12:34:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.652 ************************************ 00:14:03.652 START TEST raid5f_superblock_test 00:14:03.652 ************************************ 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:03.652 12:34:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91872 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91872 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91872 ']' 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:03.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.652 12:34:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.652 [2024-11-19 12:34:08.777874] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:03.652 [2024-11-19 12:34:08.778646] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91872 ] 00:14:03.912 [2024-11-19 12:34:08.939405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.912 [2024-11-19 12:34:09.008275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.912 [2024-11-19 12:34:09.084632] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.912 [2024-11-19 12:34:09.084831] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:04.482 12:34:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.482 malloc1 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.482 [2024-11-19 12:34:09.643633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.482 [2024-11-19 12:34:09.643757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.482 [2024-11-19 12:34:09.643792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.482 [2024-11-19 12:34:09.643815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.482 [2024-11-19 12:34:09.646294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.482 [2024-11-19 12:34:09.646340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.482 pt1 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.482 malloc2 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.482 [2024-11-19 12:34:09.691221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.482 [2024-11-19 12:34:09.691475] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.482 [2024-11-19 12:34:09.691569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:04.482 [2024-11-19 12:34:09.691659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.482 [2024-11-19 12:34:09.696699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.482 [2024-11-19 12:34:09.696899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.482 pt2 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.482 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.483 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.483 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:04.483 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.483 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.483 malloc3 00:14:04.483 12:34:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.483 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.483 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.483 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.483 [2024-11-19 12:34:09.733393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:04.483 [2024-11-19 12:34:09.733546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.483 [2024-11-19 12:34:09.733588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:04.483 [2024-11-19 12:34:09.733631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.483 [2024-11-19 12:34:09.736120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.483 [2024-11-19 12:34:09.736230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.743 pt3 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.743 [2024-11-19 12:34:09.745448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:04.743 [2024-11-19 
12:34:09.747630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.743 [2024-11-19 12:34:09.747706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:04.743 [2024-11-19 12:34:09.747895] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:04.743 [2024-11-19 12:34:09.747909] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:04.743 [2024-11-19 12:34:09.748199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:04.743 [2024-11-19 12:34:09.748670] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:04.743 [2024-11-19 12:34:09.748688] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:04.743 [2024-11-19 12:34:09.748845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.743 "name": "raid_bdev1", 00:14:04.743 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:04.743 "strip_size_kb": 64, 00:14:04.743 "state": "online", 00:14:04.743 "raid_level": "raid5f", 00:14:04.743 "superblock": true, 00:14:04.743 "num_base_bdevs": 3, 00:14:04.743 "num_base_bdevs_discovered": 3, 00:14:04.743 "num_base_bdevs_operational": 3, 00:14:04.743 "base_bdevs_list": [ 00:14:04.743 { 00:14:04.743 "name": "pt1", 00:14:04.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.743 "is_configured": true, 00:14:04.743 "data_offset": 2048, 00:14:04.743 "data_size": 63488 00:14:04.743 }, 00:14:04.743 { 00:14:04.743 "name": "pt2", 00:14:04.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.743 "is_configured": true, 00:14:04.743 "data_offset": 2048, 00:14:04.743 "data_size": 63488 00:14:04.743 }, 00:14:04.743 { 00:14:04.743 "name": "pt3", 00:14:04.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.743 "is_configured": true, 00:14:04.743 "data_offset": 2048, 00:14:04.743 "data_size": 63488 00:14:04.743 } 00:14:04.743 ] 00:14:04.743 }' 00:14:04.743 12:34:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.743 12:34:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.003 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.004 [2024-11-19 12:34:10.195126] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.004 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.004 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.004 "name": "raid_bdev1", 00:14:05.004 "aliases": [ 00:14:05.004 "a313f2ff-a809-481e-a14e-8322ad3e0d96" 00:14:05.004 ], 00:14:05.004 "product_name": "Raid Volume", 00:14:05.004 "block_size": 512, 00:14:05.004 "num_blocks": 126976, 00:14:05.004 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:05.004 "assigned_rate_limits": { 00:14:05.004 "rw_ios_per_sec": 0, 00:14:05.004 
"rw_mbytes_per_sec": 0, 00:14:05.004 "r_mbytes_per_sec": 0, 00:14:05.004 "w_mbytes_per_sec": 0 00:14:05.004 }, 00:14:05.004 "claimed": false, 00:14:05.004 "zoned": false, 00:14:05.004 "supported_io_types": { 00:14:05.004 "read": true, 00:14:05.004 "write": true, 00:14:05.004 "unmap": false, 00:14:05.004 "flush": false, 00:14:05.004 "reset": true, 00:14:05.004 "nvme_admin": false, 00:14:05.004 "nvme_io": false, 00:14:05.004 "nvme_io_md": false, 00:14:05.004 "write_zeroes": true, 00:14:05.004 "zcopy": false, 00:14:05.004 "get_zone_info": false, 00:14:05.004 "zone_management": false, 00:14:05.004 "zone_append": false, 00:14:05.004 "compare": false, 00:14:05.004 "compare_and_write": false, 00:14:05.004 "abort": false, 00:14:05.004 "seek_hole": false, 00:14:05.004 "seek_data": false, 00:14:05.004 "copy": false, 00:14:05.004 "nvme_iov_md": false 00:14:05.004 }, 00:14:05.004 "driver_specific": { 00:14:05.004 "raid": { 00:14:05.004 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:05.004 "strip_size_kb": 64, 00:14:05.004 "state": "online", 00:14:05.004 "raid_level": "raid5f", 00:14:05.004 "superblock": true, 00:14:05.004 "num_base_bdevs": 3, 00:14:05.004 "num_base_bdevs_discovered": 3, 00:14:05.004 "num_base_bdevs_operational": 3, 00:14:05.004 "base_bdevs_list": [ 00:14:05.004 { 00:14:05.004 "name": "pt1", 00:14:05.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.004 "is_configured": true, 00:14:05.004 "data_offset": 2048, 00:14:05.004 "data_size": 63488 00:14:05.004 }, 00:14:05.004 { 00:14:05.004 "name": "pt2", 00:14:05.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.004 "is_configured": true, 00:14:05.004 "data_offset": 2048, 00:14:05.004 "data_size": 63488 00:14:05.004 }, 00:14:05.004 { 00:14:05.004 "name": "pt3", 00:14:05.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.004 "is_configured": true, 00:14:05.004 "data_offset": 2048, 00:14:05.004 "data_size": 63488 00:14:05.004 } 00:14:05.004 ] 00:14:05.004 } 00:14:05.004 } 
00:14:05.004 }' 00:14:05.004 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:05.264 pt2 00:14:05.264 pt3' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 [2024-11-19 12:34:10.471017] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a313f2ff-a809-481e-a14e-8322ad3e0d96 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a313f2ff-a809-481e-a14e-8322ad3e0d96 ']' 00:14:05.264 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.265 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.265 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.265 [2024-11-19 12:34:10.514837] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.265 [2024-11-19 12:34:10.514867] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.265 [2024-11-19 12:34:10.514976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.265 [2024-11-19 12:34:10.515070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.265 [2024-11-19 12:34:10.515088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:05.265 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 
-- # raid_bdev= 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd 
bdev_get_bdevs 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.525 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.525 [2024-11-19 12:34:10.662903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:05.525 [2024-11-19 12:34:10.665211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:14:05.525 [2024-11-19 12:34:10.665311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:05.525 [2024-11-19 12:34:10.665410] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:05.525 [2024-11-19 12:34:10.665513] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:05.525 [2024-11-19 12:34:10.665582] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:05.526 [2024-11-19 12:34:10.665659] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.526 [2024-11-19 12:34:10.665709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:05.526 request: 00:14:05.526 { 00:14:05.526 "name": "raid_bdev1", 00:14:05.526 "raid_level": "raid5f", 00:14:05.526 "base_bdevs": [ 00:14:05.526 "malloc1", 00:14:05.526 "malloc2", 00:14:05.526 "malloc3" 00:14:05.526 ], 00:14:05.526 "strip_size_kb": 64, 00:14:05.526 "superblock": false, 00:14:05.526 "method": "bdev_raid_create", 00:14:05.526 "req_id": 1 00:14:05.526 } 00:14:05.526 Got JSON-RPC error response 00:14:05.526 response: 00:14:05.526 { 00:14:05.526 "code": -17, 00:14:05.526 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:05.526 } 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.526 [2024-11-19 12:34:10.710840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.526 [2024-11-19 12:34:10.710899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.526 [2024-11-19 12:34:10.710921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:05.526 [2024-11-19 12:34:10.710934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.526 [2024-11-19 12:34:10.713399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.526 [2024-11-19 12:34:10.713442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:05.526 [2024-11-19 12:34:10.713520] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
pt1 00:14:05.526 [2024-11-19 12:34:10.713563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.526 pt1 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.526 "name": "raid_bdev1", 00:14:05.526 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:05.526 "strip_size_kb": 64, 00:14:05.526 "state": "configuring", 00:14:05.526 "raid_level": "raid5f", 00:14:05.526 "superblock": true, 00:14:05.526 "num_base_bdevs": 3, 00:14:05.526 "num_base_bdevs_discovered": 1, 00:14:05.526 "num_base_bdevs_operational": 3, 00:14:05.526 "base_bdevs_list": [ 00:14:05.526 { 00:14:05.526 "name": "pt1", 00:14:05.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.526 "is_configured": true, 00:14:05.526 "data_offset": 2048, 00:14:05.526 "data_size": 63488 00:14:05.526 }, 00:14:05.526 { 00:14:05.526 "name": null, 00:14:05.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.526 "is_configured": false, 00:14:05.526 "data_offset": 2048, 00:14:05.526 "data_size": 63488 00:14:05.526 }, 00:14:05.526 { 00:14:05.526 "name": null, 00:14:05.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.526 "is_configured": false, 00:14:05.526 "data_offset": 2048, 00:14:05.526 "data_size": 63488 00:14:05.526 } 00:14:05.526 ] 00:14:05.526 }' 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.526 12:34:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.097 [2024-11-19 12:34:11.126917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.097 [2024-11-19 12:34:11.127081] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.097 [2024-11-19 12:34:11.127127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:06.097 [2024-11-19 12:34:11.127172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.097 [2024-11-19 12:34:11.127701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.097 [2024-11-19 12:34:11.127799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.097 [2024-11-19 12:34:11.127937] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:06.097 [2024-11-19 12:34:11.128005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.097 pt2 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.097 [2024-11-19 12:34:11.138905] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.097 "name": "raid_bdev1", 00:14:06.097 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:06.097 "strip_size_kb": 64, 00:14:06.097 "state": "configuring", 00:14:06.097 "raid_level": "raid5f", 00:14:06.097 "superblock": true, 00:14:06.097 "num_base_bdevs": 3, 00:14:06.097 "num_base_bdevs_discovered": 1, 00:14:06.097 "num_base_bdevs_operational": 3, 00:14:06.097 "base_bdevs_list": [ 00:14:06.097 { 00:14:06.097 "name": "pt1", 00:14:06.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.097 "is_configured": true, 00:14:06.097 "data_offset": 2048, 00:14:06.097 "data_size": 63488 00:14:06.097 }, 00:14:06.097 { 00:14:06.097 "name": null, 00:14:06.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.097 
"is_configured": false, 00:14:06.097 "data_offset": 0, 00:14:06.097 "data_size": 63488 00:14:06.097 }, 00:14:06.097 { 00:14:06.097 "name": null, 00:14:06.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.097 "is_configured": false, 00:14:06.097 "data_offset": 2048, 00:14:06.097 "data_size": 63488 00:14:06.097 } 00:14:06.097 ] 00:14:06.097 }' 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.097 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.357 [2024-11-19 12:34:11.598857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.357 [2024-11-19 12:34:11.598928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.357 [2024-11-19 12:34:11.598955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:06.357 [2024-11-19 12:34:11.598966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.357 [2024-11-19 12:34:11.599441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.357 [2024-11-19 12:34:11.599476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.357 [2024-11-19 12:34:11.599566] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
00:14:06.357 [2024-11-19 12:34:11.599595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.357 pt2 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.357 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.357 [2024-11-19 12:34:11.610841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:06.357 [2024-11-19 12:34:11.610894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.357 [2024-11-19 12:34:11.610935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:06.358 [2024-11-19 12:34:11.610945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.358 [2024-11-19 12:34:11.611342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.358 [2024-11-19 12:34:11.611369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:06.358 [2024-11-19 12:34:11.611437] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:06.358 [2024-11-19 12:34:11.611467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:06.358 [2024-11-19 12:34:11.611595] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:06.358 [2024-11-19 12:34:11.611613] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:06.358 [2024-11-19 12:34:11.611900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:06.358 [2024-11-19 12:34:11.612353] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:06.358 [2024-11-19 12:34:11.612377] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:06.358 [2024-11-19 12:34:11.612493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.617 pt3 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.617 12:34:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.617 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.617 "name": "raid_bdev1", 00:14:06.617 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:06.617 "strip_size_kb": 64, 00:14:06.617 "state": "online", 00:14:06.617 "raid_level": "raid5f", 00:14:06.617 "superblock": true, 00:14:06.617 "num_base_bdevs": 3, 00:14:06.617 "num_base_bdevs_discovered": 3, 00:14:06.617 "num_base_bdevs_operational": 3, 00:14:06.617 "base_bdevs_list": [ 00:14:06.617 { 00:14:06.617 "name": "pt1", 00:14:06.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.617 "is_configured": true, 00:14:06.617 "data_offset": 2048, 00:14:06.617 "data_size": 63488 00:14:06.617 }, 00:14:06.617 { 00:14:06.617 "name": "pt2", 00:14:06.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.617 "is_configured": true, 00:14:06.618 "data_offset": 2048, 00:14:06.618 "data_size": 63488 00:14:06.618 }, 00:14:06.618 { 00:14:06.618 "name": "pt3", 00:14:06.618 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.618 "is_configured": true, 00:14:06.618 "data_offset": 2048, 00:14:06.618 "data_size": 63488 00:14:06.618 } 00:14:06.618 ] 00:14:06.618 }' 00:14:06.618 12:34:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.618 12:34:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.877 [2024-11-19 12:34:12.046437] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.877 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:06.877 "name": "raid_bdev1", 00:14:06.877 "aliases": [ 00:14:06.877 "a313f2ff-a809-481e-a14e-8322ad3e0d96" 00:14:06.877 ], 00:14:06.877 "product_name": "Raid Volume", 00:14:06.877 "block_size": 512, 00:14:06.877 "num_blocks": 126976, 00:14:06.878 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:06.878 "assigned_rate_limits": { 00:14:06.878 "rw_ios_per_sec": 0, 00:14:06.878 "rw_mbytes_per_sec": 0, 00:14:06.878 "r_mbytes_per_sec": 0, 00:14:06.878 "w_mbytes_per_sec": 0 00:14:06.878 }, 00:14:06.878 "claimed": false, 
00:14:06.878 "zoned": false, 00:14:06.878 "supported_io_types": { 00:14:06.878 "read": true, 00:14:06.878 "write": true, 00:14:06.878 "unmap": false, 00:14:06.878 "flush": false, 00:14:06.878 "reset": true, 00:14:06.878 "nvme_admin": false, 00:14:06.878 "nvme_io": false, 00:14:06.878 "nvme_io_md": false, 00:14:06.878 "write_zeroes": true, 00:14:06.878 "zcopy": false, 00:14:06.878 "get_zone_info": false, 00:14:06.878 "zone_management": false, 00:14:06.878 "zone_append": false, 00:14:06.878 "compare": false, 00:14:06.878 "compare_and_write": false, 00:14:06.878 "abort": false, 00:14:06.878 "seek_hole": false, 00:14:06.878 "seek_data": false, 00:14:06.878 "copy": false, 00:14:06.878 "nvme_iov_md": false 00:14:06.878 }, 00:14:06.878 "driver_specific": { 00:14:06.878 "raid": { 00:14:06.878 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:06.878 "strip_size_kb": 64, 00:14:06.878 "state": "online", 00:14:06.878 "raid_level": "raid5f", 00:14:06.878 "superblock": true, 00:14:06.878 "num_base_bdevs": 3, 00:14:06.878 "num_base_bdevs_discovered": 3, 00:14:06.878 "num_base_bdevs_operational": 3, 00:14:06.878 "base_bdevs_list": [ 00:14:06.878 { 00:14:06.878 "name": "pt1", 00:14:06.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.878 "is_configured": true, 00:14:06.878 "data_offset": 2048, 00:14:06.878 "data_size": 63488 00:14:06.878 }, 00:14:06.878 { 00:14:06.878 "name": "pt2", 00:14:06.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.878 "is_configured": true, 00:14:06.878 "data_offset": 2048, 00:14:06.878 "data_size": 63488 00:14:06.878 }, 00:14:06.878 { 00:14:06.878 "name": "pt3", 00:14:06.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.878 "is_configured": true, 00:14:06.878 "data_offset": 2048, 00:14:06.878 "data_size": 63488 00:14:06.878 } 00:14:06.878 ] 00:14:06.878 } 00:14:06.878 } 00:14:06.878 }' 00:14:06.878 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:06.878 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:06.878 pt2 00:14:06.878 pt3' 00:14:06.878 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.138 12:34:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:07.138 [2024-11-19 12:34:12.326008] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
a313f2ff-a809-481e-a14e-8322ad3e0d96 '!=' a313f2ff-a809-481e-a14e-8322ad3e0d96 ']' 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.138 [2024-11-19 12:34:12.377787] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.138 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.398 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.398 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.398 "name": "raid_bdev1", 00:14:07.398 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:07.398 "strip_size_kb": 64, 00:14:07.398 "state": "online", 00:14:07.398 "raid_level": "raid5f", 00:14:07.398 "superblock": true, 00:14:07.398 "num_base_bdevs": 3, 00:14:07.398 "num_base_bdevs_discovered": 2, 00:14:07.398 "num_base_bdevs_operational": 2, 00:14:07.398 "base_bdevs_list": [ 00:14:07.398 { 00:14:07.398 "name": null, 00:14:07.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.398 "is_configured": false, 00:14:07.398 "data_offset": 0, 00:14:07.398 "data_size": 63488 00:14:07.398 }, 00:14:07.398 { 00:14:07.398 "name": "pt2", 00:14:07.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.398 "is_configured": true, 00:14:07.398 "data_offset": 2048, 00:14:07.398 "data_size": 63488 00:14:07.398 }, 00:14:07.398 { 00:14:07.398 "name": "pt3", 00:14:07.398 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.398 "is_configured": true, 00:14:07.398 "data_offset": 2048, 00:14:07.398 "data_size": 63488 00:14:07.398 } 00:14:07.398 ] 00:14:07.398 }' 00:14:07.398 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.398 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.659 
12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.659 [2024-11-19 12:34:12.805014] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.659 [2024-11-19 12:34:12.805114] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.659 [2024-11-19 12:34:12.805216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.659 [2024-11-19 12:34:12.805324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.659 [2024-11-19 12:34:12.805388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:07.659 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.660 [2024-11-19 12:34:12.888853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:14:07.660 [2024-11-19 12:34:12.888916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.660 [2024-11-19 12:34:12.888942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:07.660 [2024-11-19 12:34:12.888954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.660 [2024-11-19 12:34:12.891578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.660 [2024-11-19 12:34:12.891671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:07.660 [2024-11-19 12:34:12.891790] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:07.660 [2024-11-19 12:34:12.891837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:07.660 pt2 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.660 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.920 12:34:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.920 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.920 "name": "raid_bdev1", 00:14:07.920 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:07.920 "strip_size_kb": 64, 00:14:07.920 "state": "configuring", 00:14:07.920 "raid_level": "raid5f", 00:14:07.920 "superblock": true, 00:14:07.920 "num_base_bdevs": 3, 00:14:07.921 "num_base_bdevs_discovered": 1, 00:14:07.921 "num_base_bdevs_operational": 2, 00:14:07.921 "base_bdevs_list": [ 00:14:07.921 { 00:14:07.921 "name": null, 00:14:07.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.921 "is_configured": false, 00:14:07.921 "data_offset": 2048, 00:14:07.921 "data_size": 63488 00:14:07.921 }, 00:14:07.921 { 00:14:07.921 "name": "pt2", 00:14:07.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.921 "is_configured": true, 00:14:07.921 "data_offset": 2048, 00:14:07.921 "data_size": 63488 00:14:07.921 }, 00:14:07.921 { 00:14:07.921 "name": null, 00:14:07.921 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.921 "is_configured": false, 00:14:07.921 "data_offset": 2048, 00:14:07.921 "data_size": 63488 00:14:07.921 } 00:14:07.921 ] 00:14:07.921 }' 00:14:07.921 12:34:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.921 12:34:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.181 [2024-11-19 12:34:13.384063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:08.181 [2024-11-19 12:34:13.384228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.181 [2024-11-19 12:34:13.384264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:08.181 [2024-11-19 12:34:13.384276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.181 [2024-11-19 12:34:13.384835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.181 [2024-11-19 12:34:13.384868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:08.181 [2024-11-19 12:34:13.384977] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:08.181 [2024-11-19 12:34:13.385014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:08.181 [2024-11-19 12:34:13.385137] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:08.181 [2024-11-19 12:34:13.385156] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:08.181 [2024-11-19 
12:34:13.385441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:08.181 [2024-11-19 12:34:13.385968] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:08.181 [2024-11-19 12:34:13.385995] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:14:08.181 [2024-11-19 12:34:13.386277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.181 pt3 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.181 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.182 12:34:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.182 "name": "raid_bdev1", 00:14:08.182 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:08.182 "strip_size_kb": 64, 00:14:08.182 "state": "online", 00:14:08.182 "raid_level": "raid5f", 00:14:08.182 "superblock": true, 00:14:08.182 "num_base_bdevs": 3, 00:14:08.182 "num_base_bdevs_discovered": 2, 00:14:08.182 "num_base_bdevs_operational": 2, 00:14:08.182 "base_bdevs_list": [ 00:14:08.182 { 00:14:08.182 "name": null, 00:14:08.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.182 "is_configured": false, 00:14:08.182 "data_offset": 2048, 00:14:08.182 "data_size": 63488 00:14:08.182 }, 00:14:08.182 { 00:14:08.182 "name": "pt2", 00:14:08.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.182 "is_configured": true, 00:14:08.182 "data_offset": 2048, 00:14:08.182 "data_size": 63488 00:14:08.182 }, 00:14:08.182 { 00:14:08.182 "name": "pt3", 00:14:08.182 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.182 "is_configured": true, 00:14:08.182 "data_offset": 2048, 00:14:08.182 "data_size": 63488 00:14:08.182 } 00:14:08.182 ] 00:14:08.182 }' 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.182 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.752 [2024-11-19 12:34:13.775856] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.752 [2024-11-19 12:34:13.775948] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.752 [2024-11-19 12:34:13.776067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.752 [2024-11-19 12:34:13.776147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.752 [2024-11-19 12:34:13.776211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:08.752 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.753 12:34:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.753 [2024-11-19 12:34:13.843775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:08.753 [2024-11-19 12:34:13.843907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.753 [2024-11-19 12:34:13.843947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:08.753 [2024-11-19 12:34:13.843997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.753 [2024-11-19 12:34:13.846553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.753 [2024-11-19 12:34:13.846655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:08.753 [2024-11-19 12:34:13.846795] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:08.753 [2024-11-19 12:34:13.846888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:08.753 [2024-11-19 12:34:13.847046] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:08.753 [2024-11-19 12:34:13.847118] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.753 [2024-11-19 12:34:13.847165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:08.753 
[2024-11-19 12:34:13.847296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:08.753 pt1 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.753 "name": "raid_bdev1", 00:14:08.753 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:08.753 "strip_size_kb": 64, 00:14:08.753 "state": "configuring", 00:14:08.753 "raid_level": "raid5f", 00:14:08.753 "superblock": true, 00:14:08.753 "num_base_bdevs": 3, 00:14:08.753 "num_base_bdevs_discovered": 1, 00:14:08.753 "num_base_bdevs_operational": 2, 00:14:08.753 "base_bdevs_list": [ 00:14:08.753 { 00:14:08.753 "name": null, 00:14:08.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.753 "is_configured": false, 00:14:08.753 "data_offset": 2048, 00:14:08.753 "data_size": 63488 00:14:08.753 }, 00:14:08.753 { 00:14:08.753 "name": "pt2", 00:14:08.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.753 "is_configured": true, 00:14:08.753 "data_offset": 2048, 00:14:08.753 "data_size": 63488 00:14:08.753 }, 00:14:08.753 { 00:14:08.753 "name": null, 00:14:08.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.753 "is_configured": false, 00:14:08.753 "data_offset": 2048, 00:14:08.753 "data_size": 63488 00:14:08.753 } 00:14:08.753 ] 00:14:08.753 }' 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.753 12:34:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.012 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:09.012 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:09.013 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.013 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.272 [2024-11-19 12:34:14.318940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:09.272 [2024-11-19 12:34:14.319005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.272 [2024-11-19 12:34:14.319026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:09.272 [2024-11-19 12:34:14.319041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.272 [2024-11-19 12:34:14.319484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.272 [2024-11-19 12:34:14.319511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:09.272 [2024-11-19 12:34:14.319587] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:09.272 [2024-11-19 12:34:14.319614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:09.272 [2024-11-19 12:34:14.319713] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:09.272 [2024-11-19 12:34:14.319727] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:09.272 [2024-11-19 12:34:14.320015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:09.272 [2024-11-19 12:34:14.320549] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:09.272 [2024-11-19 
12:34:14.320563] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:09.272 [2024-11-19 12:34:14.320769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.272 pt3 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.272 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.273 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.273 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.273 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.273 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.273 12:34:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.273 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.273 "name": "raid_bdev1", 00:14:09.273 "uuid": "a313f2ff-a809-481e-a14e-8322ad3e0d96", 00:14:09.273 "strip_size_kb": 64, 00:14:09.273 "state": "online", 00:14:09.273 "raid_level": "raid5f", 00:14:09.273 "superblock": true, 00:14:09.273 "num_base_bdevs": 3, 00:14:09.273 "num_base_bdevs_discovered": 2, 00:14:09.273 "num_base_bdevs_operational": 2, 00:14:09.273 "base_bdevs_list": [ 00:14:09.273 { 00:14:09.273 "name": null, 00:14:09.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.273 "is_configured": false, 00:14:09.273 "data_offset": 2048, 00:14:09.273 "data_size": 63488 00:14:09.273 }, 00:14:09.273 { 00:14:09.273 "name": "pt2", 00:14:09.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.273 "is_configured": true, 00:14:09.273 "data_offset": 2048, 00:14:09.273 "data_size": 63488 00:14:09.273 }, 00:14:09.273 { 00:14:09.273 "name": "pt3", 00:14:09.273 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.273 "is_configured": true, 00:14:09.273 "data_offset": 2048, 00:14:09.273 "data_size": 63488 00:14:09.273 } 00:14:09.273 ] 00:14:09.273 }' 00:14:09.273 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.273 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.533 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:09.533 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:09.533 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.533 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:09.792 [2024-11-19 12:34:14.835076] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a313f2ff-a809-481e-a14e-8322ad3e0d96 '!=' a313f2ff-a809-481e-a14e-8322ad3e0d96 ']' 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91872 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91872 ']' 00:14:09.792 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91872 00:14:09.793 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:09.793 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.793 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91872 00:14:09.793 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:09.793 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:09.793 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 91872' 00:14:09.793 killing process with pid 91872 00:14:09.793 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91872 00:14:09.793 [2024-11-19 12:34:14.909548] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.793 [2024-11-19 12:34:14.909716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.793 12:34:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91872 00:14:09.793 [2024-11-19 12:34:14.909856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.793 [2024-11-19 12:34:14.909873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:09.793 [2024-11-19 12:34:14.972943] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.362 12:34:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:10.362 00:14:10.362 real 0m6.689s 00:14:10.362 user 0m10.847s 00:14:10.362 sys 0m1.556s 00:14:10.362 12:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.362 12:34:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.362 ************************************ 00:14:10.362 END TEST raid5f_superblock_test 00:14:10.362 ************************************ 00:14:10.362 12:34:15 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:10.362 12:34:15 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:10.362 12:34:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:10.362 12:34:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.362 12:34:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.363 ************************************ 00:14:10.363 START TEST raid5f_rebuild_test 
00:14:10.363 ************************************ 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92309 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92309 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92309 ']' 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.363 12:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.363 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.363 Zero copy mechanism will not be used. 00:14:10.363 [2024-11-19 12:34:15.543194] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:10.363 [2024-11-19 12:34:15.543304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92309 ] 00:14:10.623 [2024-11-19 12:34:15.702774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.623 [2024-11-19 12:34:15.776214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.623 [2024-11-19 12:34:15.852694] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.623 [2024-11-19 12:34:15.852742] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.193 12:34:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.193 BaseBdev1_malloc 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.193 [2024-11-19 12:34:16.395476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:11.193 [2024-11-19 12:34:16.395576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.193 [2024-11-19 12:34:16.395606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:11.193 [2024-11-19 12:34:16.395624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.193 [2024-11-19 12:34:16.398184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.193 [2024-11-19 12:34:16.398323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.193 BaseBdev1 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.193 
BaseBdev2_malloc 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.193 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.193 [2024-11-19 12:34:16.447355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:11.193 [2024-11-19 12:34:16.447615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.194 [2024-11-19 12:34:16.447678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:11.194 [2024-11-19 12:34:16.447706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.454 [2024-11-19 12:34:16.452661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.454 [2024-11-19 12:34:16.452725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.454 BaseBdev2 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.454 BaseBdev3_malloc 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # 
rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.454 [2024-11-19 12:34:16.483891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:11.454 [2024-11-19 12:34:16.483951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.454 [2024-11-19 12:34:16.483979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:11.454 [2024-11-19 12:34:16.483990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.454 [2024-11-19 12:34:16.486410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.454 [2024-11-19 12:34:16.486451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:11.454 BaseBdev3 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.454 spare_malloc 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.454 spare_delay 00:14:11.454 
12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.454 [2024-11-19 12:34:16.530518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.454 [2024-11-19 12:34:16.530577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.454 [2024-11-19 12:34:16.530608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:11.454 [2024-11-19 12:34:16.530618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.454 [2024-11-19 12:34:16.533193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.454 [2024-11-19 12:34:16.533295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.454 spare 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.454 [2024-11-19 12:34:16.542573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.454 [2024-11-19 12:34:16.544608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.454 [2024-11-19 12:34:16.544773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:14:11.454 [2024-11-19 12:34:16.544870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:11.454 [2024-11-19 12:34:16.544883] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:11.454 [2024-11-19 12:34:16.545156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:11.454 [2024-11-19 12:34:16.545608] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:11.454 [2024-11-19 12:34:16.545620] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:11.454 [2024-11-19 12:34:16.545799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.454 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.454 "name": "raid_bdev1", 00:14:11.454 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:11.454 "strip_size_kb": 64, 00:14:11.455 "state": "online", 00:14:11.455 "raid_level": "raid5f", 00:14:11.455 "superblock": false, 00:14:11.455 "num_base_bdevs": 3, 00:14:11.455 "num_base_bdevs_discovered": 3, 00:14:11.455 "num_base_bdevs_operational": 3, 00:14:11.455 "base_bdevs_list": [ 00:14:11.455 { 00:14:11.455 "name": "BaseBdev1", 00:14:11.455 "uuid": "aeb1d70e-2a6a-5ed1-9d58-51e9b88e8a38", 00:14:11.455 "is_configured": true, 00:14:11.455 "data_offset": 0, 00:14:11.455 "data_size": 65536 00:14:11.455 }, 00:14:11.455 { 00:14:11.455 "name": "BaseBdev2", 00:14:11.455 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:11.455 "is_configured": true, 00:14:11.455 "data_offset": 0, 00:14:11.455 "data_size": 65536 00:14:11.455 }, 00:14:11.455 { 00:14:11.455 "name": "BaseBdev3", 00:14:11.455 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:11.455 "is_configured": true, 00:14:11.455 "data_offset": 0, 00:14:11.455 "data_size": 65536 00:14:11.455 } 00:14:11.455 ] 00:14:11.455 }' 00:14:11.455 12:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.455 12:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.024 [2024-11-19 12:34:17.015655] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # 
bdev_list=('raid_bdev1') 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.024 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:12.284 [2024-11-19 12:34:17.287125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:12.284 /dev/nbd0 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:12.284 12:34:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.284 1+0 records in 00:14:12.284 1+0 records out 00:14:12.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344739 s, 11.9 MB/s 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:12.284 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:12.544 512+0 records in 00:14:12.544 512+0 records out 00:14:12.544 67108864 bytes (67 MB, 64 MiB) copied, 0.297674 s, 225 MB/s 00:14:12.544 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:12.544 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.544 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:14:12.544 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.544 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:12.544 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.544 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:12.804 [2024-11-19 12:34:17.861844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.804 [2024-11-19 12:34:17.875421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.804 "name": "raid_bdev1", 00:14:12.804 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:12.804 "strip_size_kb": 64, 00:14:12.804 "state": "online", 00:14:12.804 "raid_level": "raid5f", 00:14:12.804 "superblock": false, 00:14:12.804 "num_base_bdevs": 3, 00:14:12.804 "num_base_bdevs_discovered": 2, 
00:14:12.804 "num_base_bdevs_operational": 2, 00:14:12.804 "base_bdevs_list": [ 00:14:12.804 { 00:14:12.804 "name": null, 00:14:12.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.804 "is_configured": false, 00:14:12.804 "data_offset": 0, 00:14:12.804 "data_size": 65536 00:14:12.804 }, 00:14:12.804 { 00:14:12.804 "name": "BaseBdev2", 00:14:12.804 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:12.804 "is_configured": true, 00:14:12.804 "data_offset": 0, 00:14:12.804 "data_size": 65536 00:14:12.804 }, 00:14:12.804 { 00:14:12.804 "name": "BaseBdev3", 00:14:12.804 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:12.804 "is_configured": true, 00:14:12.804 "data_offset": 0, 00:14:12.804 "data_size": 65536 00:14:12.804 } 00:14:12.804 ] 00:14:12.804 }' 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.804 12:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.064 12:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.064 12:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.064 12:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.064 [2024-11-19 12:34:18.314774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.064 [2024-11-19 12:34:18.318768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:14:13.064 12:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.064 12:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:13.064 [2024-11-19 12:34:18.321016] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.444 "name": "raid_bdev1", 00:14:14.444 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:14.444 "strip_size_kb": 64, 00:14:14.444 "state": "online", 00:14:14.444 "raid_level": "raid5f", 00:14:14.444 "superblock": false, 00:14:14.444 "num_base_bdevs": 3, 00:14:14.444 "num_base_bdevs_discovered": 3, 00:14:14.444 "num_base_bdevs_operational": 3, 00:14:14.444 "process": { 00:14:14.444 "type": "rebuild", 00:14:14.444 "target": "spare", 00:14:14.444 "progress": { 00:14:14.444 "blocks": 20480, 00:14:14.444 "percent": 15 00:14:14.444 } 00:14:14.444 }, 00:14:14.444 "base_bdevs_list": [ 00:14:14.444 { 00:14:14.444 "name": "spare", 00:14:14.444 "uuid": "86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:14.444 "is_configured": true, 00:14:14.444 "data_offset": 0, 00:14:14.444 "data_size": 65536 00:14:14.444 }, 00:14:14.444 { 00:14:14.444 "name": "BaseBdev2", 00:14:14.444 "uuid": 
"b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:14.444 "is_configured": true, 00:14:14.444 "data_offset": 0, 00:14:14.444 "data_size": 65536 00:14:14.444 }, 00:14:14.444 { 00:14:14.444 "name": "BaseBdev3", 00:14:14.444 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:14.444 "is_configured": true, 00:14:14.444 "data_offset": 0, 00:14:14.444 "data_size": 65536 00:14:14.444 } 00:14:14.444 ] 00:14:14.444 }' 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.444 [2024-11-19 12:34:19.461620] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.444 [2024-11-19 12:34:19.528885] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.444 [2024-11-19 12:34:19.528957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.444 [2024-11-19 12:34:19.528973] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.444 [2024-11-19 12:34:19.528983] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.444 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.445 "name": "raid_bdev1", 00:14:14.445 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:14.445 "strip_size_kb": 64, 00:14:14.445 "state": "online", 00:14:14.445 "raid_level": "raid5f", 00:14:14.445 "superblock": false, 00:14:14.445 "num_base_bdevs": 3, 00:14:14.445 "num_base_bdevs_discovered": 2, 00:14:14.445 
"num_base_bdevs_operational": 2, 00:14:14.445 "base_bdevs_list": [ 00:14:14.445 { 00:14:14.445 "name": null, 00:14:14.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.445 "is_configured": false, 00:14:14.445 "data_offset": 0, 00:14:14.445 "data_size": 65536 00:14:14.445 }, 00:14:14.445 { 00:14:14.445 "name": "BaseBdev2", 00:14:14.445 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:14.445 "is_configured": true, 00:14:14.445 "data_offset": 0, 00:14:14.445 "data_size": 65536 00:14:14.445 }, 00:14:14.445 { 00:14:14.445 "name": "BaseBdev3", 00:14:14.445 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:14.445 "is_configured": true, 00:14:14.445 "data_offset": 0, 00:14:14.445 "data_size": 65536 00:14:14.445 } 00:14:14.445 ] 00:14:14.445 }' 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.445 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.014 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.014 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.014 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.014 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.014 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.014 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.014 12:34:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.014 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.014 12:34:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.014 12:34:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.014 12:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.014 "name": "raid_bdev1", 00:14:15.014 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:15.014 "strip_size_kb": 64, 00:14:15.015 "state": "online", 00:14:15.015 "raid_level": "raid5f", 00:14:15.015 "superblock": false, 00:14:15.015 "num_base_bdevs": 3, 00:14:15.015 "num_base_bdevs_discovered": 2, 00:14:15.015 "num_base_bdevs_operational": 2, 00:14:15.015 "base_bdevs_list": [ 00:14:15.015 { 00:14:15.015 "name": null, 00:14:15.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.015 "is_configured": false, 00:14:15.015 "data_offset": 0, 00:14:15.015 "data_size": 65536 00:14:15.015 }, 00:14:15.015 { 00:14:15.015 "name": "BaseBdev2", 00:14:15.015 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:15.015 "is_configured": true, 00:14:15.015 "data_offset": 0, 00:14:15.015 "data_size": 65536 00:14:15.015 }, 00:14:15.015 { 00:14:15.015 "name": "BaseBdev3", 00:14:15.015 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:15.015 "is_configured": true, 00:14:15.015 "data_offset": 0, 00:14:15.015 "data_size": 65536 00:14:15.015 } 00:14:15.015 ] 00:14:15.015 }' 00:14:15.015 12:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.015 12:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.015 12:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.015 12:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.015 12:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.015 12:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.015 12:34:20 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:15.015 [2024-11-19 12:34:20.121519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.015 [2024-11-19 12:34:20.125195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:15.015 [2024-11-19 12:34:20.127276] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.015 12:34:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.015 12:34:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:15.952 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.953 "name": "raid_bdev1", 00:14:15.953 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:15.953 "strip_size_kb": 64, 00:14:15.953 "state": "online", 
00:14:15.953 "raid_level": "raid5f", 00:14:15.953 "superblock": false, 00:14:15.953 "num_base_bdevs": 3, 00:14:15.953 "num_base_bdevs_discovered": 3, 00:14:15.953 "num_base_bdevs_operational": 3, 00:14:15.953 "process": { 00:14:15.953 "type": "rebuild", 00:14:15.953 "target": "spare", 00:14:15.953 "progress": { 00:14:15.953 "blocks": 20480, 00:14:15.953 "percent": 15 00:14:15.953 } 00:14:15.953 }, 00:14:15.953 "base_bdevs_list": [ 00:14:15.953 { 00:14:15.953 "name": "spare", 00:14:15.953 "uuid": "86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:15.953 "is_configured": true, 00:14:15.953 "data_offset": 0, 00:14:15.953 "data_size": 65536 00:14:15.953 }, 00:14:15.953 { 00:14:15.953 "name": "BaseBdev2", 00:14:15.953 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:15.953 "is_configured": true, 00:14:15.953 "data_offset": 0, 00:14:15.953 "data_size": 65536 00:14:15.953 }, 00:14:15.953 { 00:14:15.953 "name": "BaseBdev3", 00:14:15.953 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:15.953 "is_configured": true, 00:14:15.953 "data_offset": 0, 00:14:15.953 "data_size": 65536 00:14:15.953 } 00:14:15.953 ] 00:14:15.953 }' 00:14:15.953 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # 
local timeout=454 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.212 "name": "raid_bdev1", 00:14:16.212 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:16.212 "strip_size_kb": 64, 00:14:16.212 "state": "online", 00:14:16.212 "raid_level": "raid5f", 00:14:16.212 "superblock": false, 00:14:16.212 "num_base_bdevs": 3, 00:14:16.212 "num_base_bdevs_discovered": 3, 00:14:16.212 "num_base_bdevs_operational": 3, 00:14:16.212 "process": { 00:14:16.212 "type": "rebuild", 00:14:16.212 "target": "spare", 00:14:16.212 "progress": { 00:14:16.212 "blocks": 22528, 00:14:16.212 "percent": 17 00:14:16.212 } 00:14:16.212 }, 00:14:16.212 "base_bdevs_list": [ 00:14:16.212 { 00:14:16.212 "name": "spare", 00:14:16.212 "uuid": 
"86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:16.212 "is_configured": true, 00:14:16.212 "data_offset": 0, 00:14:16.212 "data_size": 65536 00:14:16.212 }, 00:14:16.212 { 00:14:16.212 "name": "BaseBdev2", 00:14:16.212 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:16.212 "is_configured": true, 00:14:16.212 "data_offset": 0, 00:14:16.212 "data_size": 65536 00:14:16.212 }, 00:14:16.212 { 00:14:16.212 "name": "BaseBdev3", 00:14:16.212 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:16.212 "is_configured": true, 00:14:16.212 "data_offset": 0, 00:14:16.212 "data_size": 65536 00:14:16.212 } 00:14:16.212 ] 00:14:16.212 }' 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.212 12:34:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.149 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.150 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.410 12:34:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.410 "name": "raid_bdev1", 00:14:17.410 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:17.410 "strip_size_kb": 64, 00:14:17.410 "state": "online", 00:14:17.410 "raid_level": "raid5f", 00:14:17.410 "superblock": false, 00:14:17.410 "num_base_bdevs": 3, 00:14:17.410 "num_base_bdevs_discovered": 3, 00:14:17.410 "num_base_bdevs_operational": 3, 00:14:17.410 "process": { 00:14:17.410 "type": "rebuild", 00:14:17.410 "target": "spare", 00:14:17.410 "progress": { 00:14:17.410 "blocks": 45056, 00:14:17.410 "percent": 34 00:14:17.410 } 00:14:17.410 }, 00:14:17.410 "base_bdevs_list": [ 00:14:17.410 { 00:14:17.410 "name": "spare", 00:14:17.410 "uuid": "86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:17.410 "is_configured": true, 00:14:17.410 "data_offset": 0, 00:14:17.410 "data_size": 65536 00:14:17.410 }, 00:14:17.410 { 00:14:17.410 "name": "BaseBdev2", 00:14:17.410 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:17.410 "is_configured": true, 00:14:17.410 "data_offset": 0, 00:14:17.410 "data_size": 65536 00:14:17.410 }, 00:14:17.410 { 00:14:17.410 "name": "BaseBdev3", 00:14:17.410 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:17.410 "is_configured": true, 00:14:17.410 "data_offset": 0, 00:14:17.410 "data_size": 65536 00:14:17.410 } 00:14:17.410 ] 00:14:17.410 }' 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.410 12:34:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.349 12:34:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.610 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.610 "name": "raid_bdev1", 00:14:18.610 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:18.610 "strip_size_kb": 64, 00:14:18.610 "state": "online", 00:14:18.610 "raid_level": "raid5f", 00:14:18.610 "superblock": false, 00:14:18.610 "num_base_bdevs": 3, 00:14:18.610 "num_base_bdevs_discovered": 3, 00:14:18.610 
"num_base_bdevs_operational": 3, 00:14:18.610 "process": { 00:14:18.610 "type": "rebuild", 00:14:18.610 "target": "spare", 00:14:18.610 "progress": { 00:14:18.610 "blocks": 69632, 00:14:18.610 "percent": 53 00:14:18.610 } 00:14:18.610 }, 00:14:18.610 "base_bdevs_list": [ 00:14:18.610 { 00:14:18.610 "name": "spare", 00:14:18.610 "uuid": "86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:18.610 "is_configured": true, 00:14:18.610 "data_offset": 0, 00:14:18.610 "data_size": 65536 00:14:18.610 }, 00:14:18.610 { 00:14:18.610 "name": "BaseBdev2", 00:14:18.610 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:18.610 "is_configured": true, 00:14:18.610 "data_offset": 0, 00:14:18.610 "data_size": 65536 00:14:18.610 }, 00:14:18.610 { 00:14:18.610 "name": "BaseBdev3", 00:14:18.610 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:18.610 "is_configured": true, 00:14:18.610 "data_offset": 0, 00:14:18.610 "data_size": 65536 00:14:18.610 } 00:14:18.610 ] 00:14:18.610 }' 00:14:18.610 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.610 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.610 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.610 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.610 12:34:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.553 12:34:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.553 "name": "raid_bdev1", 00:14:19.553 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:19.553 "strip_size_kb": 64, 00:14:19.553 "state": "online", 00:14:19.553 "raid_level": "raid5f", 00:14:19.553 "superblock": false, 00:14:19.553 "num_base_bdevs": 3, 00:14:19.553 "num_base_bdevs_discovered": 3, 00:14:19.553 "num_base_bdevs_operational": 3, 00:14:19.553 "process": { 00:14:19.553 "type": "rebuild", 00:14:19.553 "target": "spare", 00:14:19.553 "progress": { 00:14:19.553 "blocks": 92160, 00:14:19.553 "percent": 70 00:14:19.553 } 00:14:19.553 }, 00:14:19.553 "base_bdevs_list": [ 00:14:19.553 { 00:14:19.553 "name": "spare", 00:14:19.553 "uuid": "86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:19.553 "is_configured": true, 00:14:19.553 "data_offset": 0, 00:14:19.553 "data_size": 65536 00:14:19.553 }, 00:14:19.553 { 00:14:19.553 "name": "BaseBdev2", 00:14:19.553 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:19.553 "is_configured": true, 00:14:19.553 "data_offset": 0, 00:14:19.553 "data_size": 65536 00:14:19.553 }, 00:14:19.553 { 00:14:19.553 "name": "BaseBdev3", 00:14:19.553 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:19.553 
"is_configured": true, 00:14:19.553 "data_offset": 0, 00:14:19.553 "data_size": 65536 00:14:19.553 } 00:14:19.553 ] 00:14:19.553 }' 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.553 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.851 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.851 12:34:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.809 
"name": "raid_bdev1", 00:14:20.809 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:20.809 "strip_size_kb": 64, 00:14:20.809 "state": "online", 00:14:20.809 "raid_level": "raid5f", 00:14:20.809 "superblock": false, 00:14:20.809 "num_base_bdevs": 3, 00:14:20.809 "num_base_bdevs_discovered": 3, 00:14:20.809 "num_base_bdevs_operational": 3, 00:14:20.809 "process": { 00:14:20.809 "type": "rebuild", 00:14:20.809 "target": "spare", 00:14:20.809 "progress": { 00:14:20.809 "blocks": 114688, 00:14:20.809 "percent": 87 00:14:20.809 } 00:14:20.809 }, 00:14:20.809 "base_bdevs_list": [ 00:14:20.809 { 00:14:20.809 "name": "spare", 00:14:20.809 "uuid": "86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:20.809 "is_configured": true, 00:14:20.809 "data_offset": 0, 00:14:20.809 "data_size": 65536 00:14:20.809 }, 00:14:20.809 { 00:14:20.809 "name": "BaseBdev2", 00:14:20.809 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:20.809 "is_configured": true, 00:14:20.809 "data_offset": 0, 00:14:20.809 "data_size": 65536 00:14:20.809 }, 00:14:20.809 { 00:14:20.809 "name": "BaseBdev3", 00:14:20.809 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:20.809 "is_configured": true, 00:14:20.809 "data_offset": 0, 00:14:20.809 "data_size": 65536 00:14:20.809 } 00:14:20.809 ] 00:14:20.809 }' 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.809 12:34:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.378 [2024-11-19 12:34:26.571783] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:21.378 [2024-11-19 12:34:26.571858] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:21.378 [2024-11-19 12:34:26.571908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.947 12:34:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.947 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.947 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.947 "name": "raid_bdev1", 00:14:21.947 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:21.947 "strip_size_kb": 64, 00:14:21.947 "state": "online", 00:14:21.947 "raid_level": "raid5f", 00:14:21.947 "superblock": false, 00:14:21.947 "num_base_bdevs": 3, 00:14:21.947 "num_base_bdevs_discovered": 3, 00:14:21.947 "num_base_bdevs_operational": 3, 00:14:21.947 "base_bdevs_list": [ 00:14:21.947 { 00:14:21.947 "name": "spare", 00:14:21.947 "uuid": 
"86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:21.947 "is_configured": true, 00:14:21.947 "data_offset": 0, 00:14:21.947 "data_size": 65536 00:14:21.947 }, 00:14:21.947 { 00:14:21.947 "name": "BaseBdev2", 00:14:21.947 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:21.947 "is_configured": true, 00:14:21.947 "data_offset": 0, 00:14:21.948 "data_size": 65536 00:14:21.948 }, 00:14:21.948 { 00:14:21.948 "name": "BaseBdev3", 00:14:21.948 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:21.948 "is_configured": true, 00:14:21.948 "data_offset": 0, 00:14:21.948 "data_size": 65536 00:14:21.948 } 00:14:21.948 ] 00:14:21.948 }' 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.948 "name": "raid_bdev1", 00:14:21.948 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:21.948 "strip_size_kb": 64, 00:14:21.948 "state": "online", 00:14:21.948 "raid_level": "raid5f", 00:14:21.948 "superblock": false, 00:14:21.948 "num_base_bdevs": 3, 00:14:21.948 "num_base_bdevs_discovered": 3, 00:14:21.948 "num_base_bdevs_operational": 3, 00:14:21.948 "base_bdevs_list": [ 00:14:21.948 { 00:14:21.948 "name": "spare", 00:14:21.948 "uuid": "86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:21.948 "is_configured": true, 00:14:21.948 "data_offset": 0, 00:14:21.948 "data_size": 65536 00:14:21.948 }, 00:14:21.948 { 00:14:21.948 "name": "BaseBdev2", 00:14:21.948 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:21.948 "is_configured": true, 00:14:21.948 "data_offset": 0, 00:14:21.948 "data_size": 65536 00:14:21.948 }, 00:14:21.948 { 00:14:21.948 "name": "BaseBdev3", 00:14:21.948 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:21.948 "is_configured": true, 00:14:21.948 "data_offset": 0, 00:14:21.948 "data_size": 65536 00:14:21.948 } 00:14:21.948 ] 00:14:21.948 }' 00:14:21.948 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.207 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.207 "name": "raid_bdev1", 00:14:22.207 "uuid": "3087ab90-c096-4c3e-9eac-789f4e624fd0", 00:14:22.207 "strip_size_kb": 64, 00:14:22.208 "state": "online", 00:14:22.208 "raid_level": "raid5f", 00:14:22.208 "superblock": false, 00:14:22.208 "num_base_bdevs": 3, 00:14:22.208 "num_base_bdevs_discovered": 3, 00:14:22.208 
"num_base_bdevs_operational": 3, 00:14:22.208 "base_bdevs_list": [ 00:14:22.208 { 00:14:22.208 "name": "spare", 00:14:22.208 "uuid": "86013f31-ca4b-5bd2-bcb9-1f25c8c56b6f", 00:14:22.208 "is_configured": true, 00:14:22.208 "data_offset": 0, 00:14:22.208 "data_size": 65536 00:14:22.208 }, 00:14:22.208 { 00:14:22.208 "name": "BaseBdev2", 00:14:22.208 "uuid": "b6d02be9-319e-5900-876d-0796a6ae5e35", 00:14:22.208 "is_configured": true, 00:14:22.208 "data_offset": 0, 00:14:22.208 "data_size": 65536 00:14:22.208 }, 00:14:22.208 { 00:14:22.208 "name": "BaseBdev3", 00:14:22.208 "uuid": "c5b2ed8c-9451-5da3-abde-1c9d57bebdce", 00:14:22.208 "is_configured": true, 00:14:22.208 "data_offset": 0, 00:14:22.208 "data_size": 65536 00:14:22.208 } 00:14:22.208 ] 00:14:22.208 }' 00:14:22.208 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.208 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.466 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:22.466 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.466 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.724 [2024-11-19 12:34:27.727167] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:22.724 [2024-11-19 12:34:27.727205] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.724 [2024-11-19 12:34:27.727305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.724 [2024-11-19 12:34:27.727392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.724 [2024-11-19 12:34:27.727409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:22.724 12:34:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:22.724 12:34:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:22.724 /dev/nbd0 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.983 1+0 records in 00:14:22.983 1+0 records out 00:14:22.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358714 s, 11.4 MB/s 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:22.983 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:22.983 /dev/nbd1 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.242 1+0 records in 00:14:23.242 1+0 records out 00:14:23.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379827 s, 10.8 MB/s 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.242 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.501 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:23.760 12:34:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92309 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92309 ']' 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92309 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92309 00:14:23.761 killing process with pid 92309 00:14:23.761 Received shutdown signal, test time was about 60.000000 seconds 00:14:23.761 00:14:23.761 Latency(us) 00:14:23.761 [2024-11-19T12:34:29.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.761 [2024-11-19T12:34:29.022Z] =================================================================================================================== 00:14:23.761 [2024-11-19T12:34:29.022Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92309' 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92309 00:14:23.761 [2024-11-19 12:34:28.868204] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.761 12:34:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92309 00:14:23.761 [2024-11-19 12:34:28.909553] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.019 12:34:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:24.019 00:14:24.020 real 0m13.699s 00:14:24.020 user 0m17.010s 00:14:24.020 sys 0m2.101s 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.020 ************************************ 00:14:24.020 END TEST raid5f_rebuild_test 00:14:24.020 ************************************ 00:14:24.020 12:34:29 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:24.020 12:34:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:24.020 12:34:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:24.020 12:34:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.020 ************************************ 00:14:24.020 START TEST raid5f_rebuild_test_sb 00:14:24.020 ************************************ 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92730 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92730 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92730 ']' 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.020 12:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.278 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:24.278 Zero copy mechanism will not be used. 00:14:24.278 [2024-11-19 12:34:29.315859] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:24.278 [2024-11-19 12:34:29.316000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92730 ] 00:14:24.278 [2024-11-19 12:34:29.475855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.278 [2024-11-19 12:34:29.530125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.537 [2024-11-19 12:34:29.573313] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.537 [2024-11-19 12:34:29.573356] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 BaseBdev1_malloc 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 [2024-11-19 12:34:30.208155] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:25.107 [2024-11-19 12:34:30.208232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.107 [2024-11-19 12:34:30.208262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:25.107 [2024-11-19 12:34:30.208279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.107 [2024-11-19 12:34:30.210645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.107 [2024-11-19 12:34:30.210685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:25.107 BaseBdev1 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 BaseBdev2_malloc 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 [2024-11-19 12:34:30.248098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:25.107 [2024-11-19 12:34:30.248171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:25.107 [2024-11-19 12:34:30.248196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:25.107 [2024-11-19 12:34:30.248205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.107 [2024-11-19 12:34:30.250497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.107 [2024-11-19 12:34:30.250543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:25.107 BaseBdev2 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 BaseBdev3_malloc 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 [2024-11-19 12:34:30.277104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:25.107 [2024-11-19 12:34:30.277171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.107 [2024-11-19 12:34:30.277201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:25.107 [2024-11-19 
12:34:30.277211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.107 [2024-11-19 12:34:30.279498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.107 [2024-11-19 12:34:30.279539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:25.107 BaseBdev3 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 spare_malloc 00:14:25.107 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.108 spare_delay 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.108 [2024-11-19 12:34:30.317972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:25.108 [2024-11-19 12:34:30.318028] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.108 [2024-11-19 12:34:30.318055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:25.108 [2024-11-19 12:34:30.318064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.108 [2024-11-19 12:34:30.320482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.108 [2024-11-19 12:34:30.320521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:25.108 spare 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.108 [2024-11-19 12:34:30.330028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.108 [2024-11-19 12:34:30.331898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.108 [2024-11-19 12:34:30.331971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.108 [2024-11-19 12:34:30.332145] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:25.108 [2024-11-19 12:34:30.332160] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:25.108 [2024-11-19 12:34:30.332440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:25.108 [2024-11-19 12:34:30.332876] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:25.108 [2024-11-19 12:34:30.332895] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:25.108 [2024-11-19 12:34:30.333046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.108 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.368 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.368 "name": "raid_bdev1", 00:14:25.368 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:25.368 "strip_size_kb": 64, 00:14:25.368 "state": "online", 00:14:25.368 "raid_level": "raid5f", 00:14:25.368 "superblock": true, 00:14:25.368 "num_base_bdevs": 3, 00:14:25.368 "num_base_bdevs_discovered": 3, 00:14:25.368 "num_base_bdevs_operational": 3, 00:14:25.368 "base_bdevs_list": [ 00:14:25.368 { 00:14:25.368 "name": "BaseBdev1", 00:14:25.368 "uuid": "19c3bd37-accd-5f2b-88db-448674c68bc6", 00:14:25.368 "is_configured": true, 00:14:25.368 "data_offset": 2048, 00:14:25.368 "data_size": 63488 00:14:25.368 }, 00:14:25.368 { 00:14:25.368 "name": "BaseBdev2", 00:14:25.368 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:25.368 "is_configured": true, 00:14:25.368 "data_offset": 2048, 00:14:25.368 "data_size": 63488 00:14:25.368 }, 00:14:25.368 { 00:14:25.368 "name": "BaseBdev3", 00:14:25.368 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:25.368 "is_configured": true, 00:14:25.368 "data_offset": 2048, 00:14:25.368 "data_size": 63488 00:14:25.368 } 00:14:25.368 ] 00:14:25.368 }' 00:14:25.368 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.368 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:25.629 [2024-11-19 12:34:30.769880] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.629 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:14:25.630 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.630 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.630 12:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:25.890 [2024-11-19 12:34:31.025273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:25.890 /dev/nbd0 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.890 1+0 records in 00:14:25.890 1+0 records out 00:14:25.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310374 s, 13.2 MB/s 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:25.890 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:26.461 496+0 records in 00:14:26.461 496+0 records out 00:14:26.461 65011712 bytes (65 MB, 62 MiB) copied, 0.304383 s, 214 MB/s 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.461 [2024-11-19 12:34:31.630640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.461 [2024-11-19 12:34:31.662679] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.461 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.721 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.721 "name": "raid_bdev1", 00:14:26.721 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:26.721 "strip_size_kb": 64, 00:14:26.721 "state": "online", 00:14:26.721 "raid_level": "raid5f", 00:14:26.721 "superblock": true, 00:14:26.721 "num_base_bdevs": 3, 00:14:26.721 "num_base_bdevs_discovered": 2, 00:14:26.721 "num_base_bdevs_operational": 2, 00:14:26.721 "base_bdevs_list": [ 00:14:26.721 { 00:14:26.721 "name": null, 00:14:26.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.721 "is_configured": 
false, 00:14:26.721 "data_offset": 0, 00:14:26.721 "data_size": 63488 00:14:26.721 }, 00:14:26.721 { 00:14:26.721 "name": "BaseBdev2", 00:14:26.721 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:26.721 "is_configured": true, 00:14:26.721 "data_offset": 2048, 00:14:26.721 "data_size": 63488 00:14:26.721 }, 00:14:26.721 { 00:14:26.721 "name": "BaseBdev3", 00:14:26.721 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:26.721 "is_configured": true, 00:14:26.721 "data_offset": 2048, 00:14:26.721 "data_size": 63488 00:14:26.721 } 00:14:26.721 ] 00:14:26.721 }' 00:14:26.721 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.721 12:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.981 12:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.981 12:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.981 12:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.981 [2024-11-19 12:34:32.109980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.981 [2024-11-19 12:34:32.114028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:14:26.981 12:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.981 12:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:26.981 [2024-11-19 12:34:32.116396] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.922 12:34:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.922 "name": "raid_bdev1", 00:14:27.922 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:27.922 "strip_size_kb": 64, 00:14:27.922 "state": "online", 00:14:27.922 "raid_level": "raid5f", 00:14:27.922 "superblock": true, 00:14:27.922 "num_base_bdevs": 3, 00:14:27.922 "num_base_bdevs_discovered": 3, 00:14:27.922 "num_base_bdevs_operational": 3, 00:14:27.922 "process": { 00:14:27.922 "type": "rebuild", 00:14:27.922 "target": "spare", 00:14:27.922 "progress": { 00:14:27.922 "blocks": 20480, 00:14:27.922 "percent": 16 00:14:27.922 } 00:14:27.922 }, 00:14:27.922 "base_bdevs_list": [ 00:14:27.922 { 00:14:27.922 "name": "spare", 00:14:27.922 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:27.922 "is_configured": true, 00:14:27.922 "data_offset": 2048, 00:14:27.922 "data_size": 63488 00:14:27.922 }, 00:14:27.922 { 00:14:27.922 "name": "BaseBdev2", 00:14:27.922 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:27.922 "is_configured": true, 00:14:27.922 "data_offset": 2048, 00:14:27.922 "data_size": 63488 
00:14:27.922 }, 00:14:27.922 { 00:14:27.922 "name": "BaseBdev3", 00:14:27.922 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:27.922 "is_configured": true, 00:14:27.922 "data_offset": 2048, 00:14:27.922 "data_size": 63488 00:14:27.922 } 00:14:27.922 ] 00:14:27.922 }' 00:14:27.922 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.183 [2024-11-19 12:34:33.273507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.183 [2024-11-19 12:34:33.327808] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:28.183 [2024-11-19 12:34:33.328307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.183 [2024-11-19 12:34:33.328343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.183 [2024-11-19 12:34:33.328360] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.183 "name": "raid_bdev1", 00:14:28.183 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:28.183 "strip_size_kb": 64, 00:14:28.183 "state": "online", 00:14:28.183 "raid_level": "raid5f", 00:14:28.183 "superblock": true, 00:14:28.183 "num_base_bdevs": 3, 00:14:28.183 "num_base_bdevs_discovered": 2, 00:14:28.183 "num_base_bdevs_operational": 2, 00:14:28.183 "base_bdevs_list": [ 00:14:28.183 
{ 00:14:28.183 "name": null, 00:14:28.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.183 "is_configured": false, 00:14:28.183 "data_offset": 0, 00:14:28.183 "data_size": 63488 00:14:28.183 }, 00:14:28.183 { 00:14:28.183 "name": "BaseBdev2", 00:14:28.183 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:28.183 "is_configured": true, 00:14:28.183 "data_offset": 2048, 00:14:28.183 "data_size": 63488 00:14:28.183 }, 00:14:28.183 { 00:14:28.183 "name": "BaseBdev3", 00:14:28.183 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:28.183 "is_configured": true, 00:14:28.183 "data_offset": 2048, 00:14:28.183 "data_size": 63488 00:14:28.183 } 00:14:28.183 ] 00:14:28.183 }' 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.183 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.753 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.753 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.753 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.753 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.754 "name": "raid_bdev1", 00:14:28.754 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:28.754 "strip_size_kb": 64, 00:14:28.754 "state": "online", 00:14:28.754 "raid_level": "raid5f", 00:14:28.754 "superblock": true, 00:14:28.754 "num_base_bdevs": 3, 00:14:28.754 "num_base_bdevs_discovered": 2, 00:14:28.754 "num_base_bdevs_operational": 2, 00:14:28.754 "base_bdevs_list": [ 00:14:28.754 { 00:14:28.754 "name": null, 00:14:28.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.754 "is_configured": false, 00:14:28.754 "data_offset": 0, 00:14:28.754 "data_size": 63488 00:14:28.754 }, 00:14:28.754 { 00:14:28.754 "name": "BaseBdev2", 00:14:28.754 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:28.754 "is_configured": true, 00:14:28.754 "data_offset": 2048, 00:14:28.754 "data_size": 63488 00:14:28.754 }, 00:14:28.754 { 00:14:28.754 "name": "BaseBdev3", 00:14:28.754 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:28.754 "is_configured": true, 00:14:28.754 "data_offset": 2048, 00:14:28.754 "data_size": 63488 00:14:28.754 } 00:14:28.754 ] 00:14:28.754 }' 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:28.754 [2024-11-19 12:34:33.961133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.754 [2024-11-19 12:34:33.964996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.754 12:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:28.754 [2024-11-19 12:34:33.967249] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.136 12:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.136 "name": "raid_bdev1", 00:14:30.136 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:30.136 "strip_size_kb": 64, 00:14:30.136 "state": "online", 
00:14:30.136 "raid_level": "raid5f", 00:14:30.136 "superblock": true, 00:14:30.136 "num_base_bdevs": 3, 00:14:30.136 "num_base_bdevs_discovered": 3, 00:14:30.136 "num_base_bdevs_operational": 3, 00:14:30.136 "process": { 00:14:30.136 "type": "rebuild", 00:14:30.136 "target": "spare", 00:14:30.136 "progress": { 00:14:30.136 "blocks": 20480, 00:14:30.136 "percent": 16 00:14:30.136 } 00:14:30.136 }, 00:14:30.136 "base_bdevs_list": [ 00:14:30.136 { 00:14:30.136 "name": "spare", 00:14:30.136 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:30.136 "is_configured": true, 00:14:30.136 "data_offset": 2048, 00:14:30.136 "data_size": 63488 00:14:30.136 }, 00:14:30.136 { 00:14:30.136 "name": "BaseBdev2", 00:14:30.136 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:30.136 "is_configured": true, 00:14:30.136 "data_offset": 2048, 00:14:30.136 "data_size": 63488 00:14:30.136 }, 00:14:30.136 { 00:14:30.136 "name": "BaseBdev3", 00:14:30.136 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:30.136 "is_configured": true, 00:14:30.136 "data_offset": 2048, 00:14:30.136 "data_size": 63488 00:14:30.136 } 00:14:30.136 ] 00:14:30.136 }' 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:30.136 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.136 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.136 "name": "raid_bdev1", 00:14:30.136 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:30.136 "strip_size_kb": 64, 00:14:30.136 "state": "online", 00:14:30.136 "raid_level": "raid5f", 00:14:30.136 "superblock": true, 00:14:30.136 "num_base_bdevs": 3, 00:14:30.136 "num_base_bdevs_discovered": 3, 00:14:30.136 "num_base_bdevs_operational": 3, 00:14:30.136 "process": { 00:14:30.136 "type": 
"rebuild", 00:14:30.136 "target": "spare", 00:14:30.136 "progress": { 00:14:30.136 "blocks": 22528, 00:14:30.136 "percent": 17 00:14:30.136 } 00:14:30.136 }, 00:14:30.136 "base_bdevs_list": [ 00:14:30.136 { 00:14:30.136 "name": "spare", 00:14:30.136 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:30.136 "is_configured": true, 00:14:30.136 "data_offset": 2048, 00:14:30.136 "data_size": 63488 00:14:30.136 }, 00:14:30.136 { 00:14:30.136 "name": "BaseBdev2", 00:14:30.136 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:30.136 "is_configured": true, 00:14:30.136 "data_offset": 2048, 00:14:30.136 "data_size": 63488 00:14:30.137 }, 00:14:30.137 { 00:14:30.137 "name": "BaseBdev3", 00:14:30.137 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:30.137 "is_configured": true, 00:14:30.137 "data_offset": 2048, 00:14:30.137 "data_size": 63488 00:14:30.137 } 00:14:30.137 ] 00:14:30.137 }' 00:14:30.137 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.137 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.137 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.137 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.137 12:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.080 "name": "raid_bdev1", 00:14:31.080 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:31.080 "strip_size_kb": 64, 00:14:31.080 "state": "online", 00:14:31.080 "raid_level": "raid5f", 00:14:31.080 "superblock": true, 00:14:31.080 "num_base_bdevs": 3, 00:14:31.080 "num_base_bdevs_discovered": 3, 00:14:31.080 "num_base_bdevs_operational": 3, 00:14:31.080 "process": { 00:14:31.080 "type": "rebuild", 00:14:31.080 "target": "spare", 00:14:31.080 "progress": { 00:14:31.080 "blocks": 45056, 00:14:31.080 "percent": 35 00:14:31.080 } 00:14:31.080 }, 00:14:31.080 "base_bdevs_list": [ 00:14:31.080 { 00:14:31.080 "name": "spare", 00:14:31.080 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:31.080 "is_configured": true, 00:14:31.080 "data_offset": 2048, 00:14:31.080 "data_size": 63488 00:14:31.080 }, 00:14:31.080 { 00:14:31.080 "name": "BaseBdev2", 00:14:31.080 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:31.080 "is_configured": true, 00:14:31.080 "data_offset": 2048, 00:14:31.080 "data_size": 63488 00:14:31.080 }, 00:14:31.080 { 00:14:31.080 "name": "BaseBdev3", 00:14:31.080 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:31.080 
"is_configured": true, 00:14:31.080 "data_offset": 2048, 00:14:31.080 "data_size": 63488 00:14:31.080 } 00:14:31.080 ] 00:14:31.080 }' 00:14:31.080 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.350 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.350 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.350 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.350 12:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.289 "name": "raid_bdev1", 00:14:32.289 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:32.289 "strip_size_kb": 64, 00:14:32.289 "state": "online", 00:14:32.289 "raid_level": "raid5f", 00:14:32.289 "superblock": true, 00:14:32.289 "num_base_bdevs": 3, 00:14:32.289 "num_base_bdevs_discovered": 3, 00:14:32.289 "num_base_bdevs_operational": 3, 00:14:32.289 "process": { 00:14:32.289 "type": "rebuild", 00:14:32.289 "target": "spare", 00:14:32.289 "progress": { 00:14:32.289 "blocks": 69632, 00:14:32.289 "percent": 54 00:14:32.289 } 00:14:32.289 }, 00:14:32.289 "base_bdevs_list": [ 00:14:32.289 { 00:14:32.289 "name": "spare", 00:14:32.289 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:32.289 "is_configured": true, 00:14:32.289 "data_offset": 2048, 00:14:32.289 "data_size": 63488 00:14:32.289 }, 00:14:32.289 { 00:14:32.289 "name": "BaseBdev2", 00:14:32.289 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:32.289 "is_configured": true, 00:14:32.289 "data_offset": 2048, 00:14:32.289 "data_size": 63488 00:14:32.289 }, 00:14:32.289 { 00:14:32.289 "name": "BaseBdev3", 00:14:32.289 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:32.289 "is_configured": true, 00:14:32.289 "data_offset": 2048, 00:14:32.289 "data_size": 63488 00:14:32.289 } 00:14:32.289 ] 00:14:32.289 }' 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.289 12:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.670 "name": "raid_bdev1", 00:14:33.670 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:33.670 "strip_size_kb": 64, 00:14:33.670 "state": "online", 00:14:33.670 "raid_level": "raid5f", 00:14:33.670 "superblock": true, 00:14:33.670 "num_base_bdevs": 3, 00:14:33.670 "num_base_bdevs_discovered": 3, 00:14:33.670 "num_base_bdevs_operational": 3, 00:14:33.670 "process": { 00:14:33.670 "type": "rebuild", 00:14:33.670 "target": "spare", 00:14:33.670 "progress": { 00:14:33.670 "blocks": 92160, 00:14:33.670 "percent": 72 00:14:33.670 } 00:14:33.670 }, 00:14:33.670 "base_bdevs_list": [ 00:14:33.670 { 00:14:33.670 "name": "spare", 00:14:33.670 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:33.670 "is_configured": true, 
00:14:33.670 "data_offset": 2048, 00:14:33.670 "data_size": 63488 00:14:33.670 }, 00:14:33.670 { 00:14:33.670 "name": "BaseBdev2", 00:14:33.670 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:33.670 "is_configured": true, 00:14:33.670 "data_offset": 2048, 00:14:33.670 "data_size": 63488 00:14:33.670 }, 00:14:33.670 { 00:14:33.670 "name": "BaseBdev3", 00:14:33.670 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:33.670 "is_configured": true, 00:14:33.670 "data_offset": 2048, 00:14:33.670 "data_size": 63488 00:14:33.670 } 00:14:33.670 ] 00:14:33.670 }' 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.670 12:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.612 "name": "raid_bdev1", 00:14:34.612 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:34.612 "strip_size_kb": 64, 00:14:34.612 "state": "online", 00:14:34.612 "raid_level": "raid5f", 00:14:34.612 "superblock": true, 00:14:34.612 "num_base_bdevs": 3, 00:14:34.612 "num_base_bdevs_discovered": 3, 00:14:34.612 "num_base_bdevs_operational": 3, 00:14:34.612 "process": { 00:14:34.612 "type": "rebuild", 00:14:34.612 "target": "spare", 00:14:34.612 "progress": { 00:14:34.612 "blocks": 116736, 00:14:34.612 "percent": 91 00:14:34.612 } 00:14:34.612 }, 00:14:34.612 "base_bdevs_list": [ 00:14:34.612 { 00:14:34.612 "name": "spare", 00:14:34.612 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:34.612 "is_configured": true, 00:14:34.612 "data_offset": 2048, 00:14:34.612 "data_size": 63488 00:14:34.612 }, 00:14:34.612 { 00:14:34.612 "name": "BaseBdev2", 00:14:34.612 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:34.612 "is_configured": true, 00:14:34.612 "data_offset": 2048, 00:14:34.612 "data_size": 63488 00:14:34.612 }, 00:14:34.612 { 00:14:34.612 "name": "BaseBdev3", 00:14:34.612 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:34.612 "is_configured": true, 00:14:34.612 "data_offset": 2048, 00:14:34.612 "data_size": 63488 00:14:34.612 } 00:14:34.612 ] 00:14:34.612 }' 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.612 12:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.182 [2024-11-19 12:34:40.215895] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:35.182 [2024-11-19 12:34:40.215975] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:35.182 [2024-11-19 12:34:40.216084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.752 12:34:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.752 "name": "raid_bdev1", 00:14:35.752 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:35.752 "strip_size_kb": 64, 00:14:35.752 "state": "online", 00:14:35.752 "raid_level": "raid5f", 00:14:35.752 "superblock": true, 00:14:35.752 "num_base_bdevs": 3, 00:14:35.752 "num_base_bdevs_discovered": 3, 00:14:35.752 "num_base_bdevs_operational": 3, 00:14:35.752 "base_bdevs_list": [ 00:14:35.752 { 00:14:35.752 "name": "spare", 00:14:35.752 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:35.752 "is_configured": true, 00:14:35.752 "data_offset": 2048, 00:14:35.752 "data_size": 63488 00:14:35.752 }, 00:14:35.752 { 00:14:35.752 "name": "BaseBdev2", 00:14:35.752 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:35.752 "is_configured": true, 00:14:35.752 "data_offset": 2048, 00:14:35.752 "data_size": 63488 00:14:35.752 }, 00:14:35.752 { 00:14:35.752 "name": "BaseBdev3", 00:14:35.752 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:35.752 "is_configured": true, 00:14:35.752 "data_offset": 2048, 00:14:35.752 "data_size": 63488 00:14:35.752 } 00:14:35.752 ] 00:14:35.752 }' 00:14:35.752 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.753 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:35.753 12:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.014 
12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.014 "name": "raid_bdev1", 00:14:36.014 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:36.014 "strip_size_kb": 64, 00:14:36.014 "state": "online", 00:14:36.014 "raid_level": "raid5f", 00:14:36.014 "superblock": true, 00:14:36.014 "num_base_bdevs": 3, 00:14:36.014 "num_base_bdevs_discovered": 3, 00:14:36.014 "num_base_bdevs_operational": 3, 00:14:36.014 "base_bdevs_list": [ 00:14:36.014 { 00:14:36.014 "name": "spare", 00:14:36.014 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:36.014 "is_configured": true, 00:14:36.014 "data_offset": 2048, 00:14:36.014 "data_size": 63488 00:14:36.014 }, 00:14:36.014 { 00:14:36.014 "name": "BaseBdev2", 00:14:36.014 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:36.014 "is_configured": true, 00:14:36.014 "data_offset": 2048, 00:14:36.014 "data_size": 63488 00:14:36.014 }, 00:14:36.014 { 00:14:36.014 "name": "BaseBdev3", 00:14:36.014 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:36.014 "is_configured": true, 00:14:36.014 "data_offset": 2048, 
00:14:36.014 "data_size": 63488 00:14:36.014 } 00:14:36.014 ] 00:14:36.014 }' 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.014 "name": "raid_bdev1", 00:14:36.014 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:36.014 "strip_size_kb": 64, 00:14:36.014 "state": "online", 00:14:36.014 "raid_level": "raid5f", 00:14:36.014 "superblock": true, 00:14:36.014 "num_base_bdevs": 3, 00:14:36.014 "num_base_bdevs_discovered": 3, 00:14:36.014 "num_base_bdevs_operational": 3, 00:14:36.014 "base_bdevs_list": [ 00:14:36.014 { 00:14:36.014 "name": "spare", 00:14:36.014 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:36.014 "is_configured": true, 00:14:36.014 "data_offset": 2048, 00:14:36.014 "data_size": 63488 00:14:36.014 }, 00:14:36.014 { 00:14:36.014 "name": "BaseBdev2", 00:14:36.014 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:36.014 "is_configured": true, 00:14:36.014 "data_offset": 2048, 00:14:36.014 "data_size": 63488 00:14:36.014 }, 00:14:36.014 { 00:14:36.014 "name": "BaseBdev3", 00:14:36.014 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:36.014 "is_configured": true, 00:14:36.014 "data_offset": 2048, 00:14:36.014 "data_size": 63488 00:14:36.014 } 00:14:36.014 ] 00:14:36.014 }' 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.014 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.584 [2024-11-19 12:34:41.638870] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.584 [2024-11-19 12:34:41.638921] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.584 [2024-11-19 12:34:41.639021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.584 [2024-11-19 12:34:41.639113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.584 [2024-11-19 12:34:41.639133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:36.584 12:34:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:36.584 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:36.843 /dev/nbd0 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.843 1+0 records in 00:14:36.843 1+0 records out 00:14:36.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408858 s, 10.0 MB/s 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:36.843 12:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:37.103 /dev/nbd1 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:37.103 
12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.103 1+0 records in 00:14:37.103 1+0 records out 00:14:37.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306967 s, 13.3 MB/s 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.103 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.364 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.624 [2024-11-19 12:34:42.818951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.624 [2024-11-19 12:34:42.819050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.624 [2024-11-19 12:34:42.819075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:37.624 [2024-11-19 12:34:42.819085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.624 [2024-11-19 12:34:42.821352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.624 [2024-11-19 12:34:42.821397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.624 [2024-11-19 12:34:42.821493] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:37.624 [2024-11-19 12:34:42.821535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.624 [2024-11-19 12:34:42.821642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.624 [2024-11-19 12:34:42.821731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.624 spare 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.624 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.884 [2024-11-19 12:34:42.921684] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:37.884 [2024-11-19 12:34:42.921752] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:37.884 [2024-11-19 12:34:42.922118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:14:37.884 [2024-11-19 12:34:42.922603] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:37.884 [2024-11-19 12:34:42.922627] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:37.884 [2024-11-19 12:34:42.922874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.884 "name": "raid_bdev1", 00:14:37.884 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:37.884 "strip_size_kb": 64, 00:14:37.884 "state": "online", 00:14:37.884 "raid_level": "raid5f", 00:14:37.884 "superblock": true, 00:14:37.884 "num_base_bdevs": 3, 00:14:37.884 "num_base_bdevs_discovered": 3, 00:14:37.884 "num_base_bdevs_operational": 3, 00:14:37.884 "base_bdevs_list": [ 00:14:37.884 { 
00:14:37.884 "name": "spare", 00:14:37.884 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:37.884 "is_configured": true, 00:14:37.884 "data_offset": 2048, 00:14:37.884 "data_size": 63488 00:14:37.884 }, 00:14:37.884 { 00:14:37.884 "name": "BaseBdev2", 00:14:37.884 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:37.884 "is_configured": true, 00:14:37.884 "data_offset": 2048, 00:14:37.884 "data_size": 63488 00:14:37.884 }, 00:14:37.884 { 00:14:37.884 "name": "BaseBdev3", 00:14:37.884 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:37.884 "is_configured": true, 00:14:37.884 "data_offset": 2048, 00:14:37.884 "data_size": 63488 00:14:37.884 } 00:14:37.884 ] 00:14:37.884 }' 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.884 12:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.144 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.144 "name": "raid_bdev1", 00:14:38.144 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:38.144 "strip_size_kb": 64, 00:14:38.144 "state": "online", 00:14:38.144 "raid_level": "raid5f", 00:14:38.144 "superblock": true, 00:14:38.144 "num_base_bdevs": 3, 00:14:38.144 "num_base_bdevs_discovered": 3, 00:14:38.144 "num_base_bdevs_operational": 3, 00:14:38.144 "base_bdevs_list": [ 00:14:38.144 { 00:14:38.144 "name": "spare", 00:14:38.144 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:38.144 "is_configured": true, 00:14:38.144 "data_offset": 2048, 00:14:38.144 "data_size": 63488 00:14:38.144 }, 00:14:38.144 { 00:14:38.144 "name": "BaseBdev2", 00:14:38.144 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:38.144 "is_configured": true, 00:14:38.144 "data_offset": 2048, 00:14:38.144 "data_size": 63488 00:14:38.144 }, 00:14:38.144 { 00:14:38.144 "name": "BaseBdev3", 00:14:38.144 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:38.144 "is_configured": true, 00:14:38.144 "data_offset": 2048, 00:14:38.144 "data_size": 63488 00:14:38.145 } 00:14:38.145 ] 00:14:38.145 }' 00:14:38.145 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.405 [2024-11-19 12:34:43.546380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.405 12:34:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.405 "name": "raid_bdev1", 00:14:38.405 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:38.405 "strip_size_kb": 64, 00:14:38.405 "state": "online", 00:14:38.405 "raid_level": "raid5f", 00:14:38.405 "superblock": true, 00:14:38.405 "num_base_bdevs": 3, 00:14:38.405 "num_base_bdevs_discovered": 2, 00:14:38.405 "num_base_bdevs_operational": 2, 00:14:38.405 "base_bdevs_list": [ 00:14:38.405 { 00:14:38.405 "name": null, 00:14:38.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.405 "is_configured": false, 00:14:38.405 "data_offset": 0, 00:14:38.405 "data_size": 63488 00:14:38.405 }, 00:14:38.405 { 00:14:38.405 "name": "BaseBdev2", 00:14:38.405 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:38.405 "is_configured": true, 00:14:38.405 "data_offset": 2048, 00:14:38.405 "data_size": 63488 00:14:38.405 }, 00:14:38.405 { 00:14:38.405 "name": "BaseBdev3", 00:14:38.405 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:38.405 "is_configured": true, 00:14:38.405 "data_offset": 2048, 00:14:38.405 "data_size": 63488 00:14:38.405 } 00:14:38.405 ] 00:14:38.405 }' 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.405 12:34:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.976 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.976 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.976 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.976 [2024-11-19 12:34:44.029549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.976 [2024-11-19 12:34:44.029784] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:38.976 [2024-11-19 12:34:44.029807] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:38.976 [2024-11-19 12:34:44.029852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.976 [2024-11-19 12:34:44.033497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:14:38.976 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.976 12:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:38.976 [2024-11-19 12:34:44.035713] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.916 
12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.916 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.916 "name": "raid_bdev1", 00:14:39.916 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:39.916 "strip_size_kb": 64, 00:14:39.917 "state": "online", 00:14:39.917 "raid_level": "raid5f", 00:14:39.917 "superblock": true, 00:14:39.917 "num_base_bdevs": 3, 00:14:39.917 "num_base_bdevs_discovered": 3, 00:14:39.917 "num_base_bdevs_operational": 3, 00:14:39.917 "process": { 00:14:39.917 "type": "rebuild", 00:14:39.917 "target": "spare", 00:14:39.917 "progress": { 00:14:39.917 "blocks": 20480, 00:14:39.917 "percent": 16 00:14:39.917 } 00:14:39.917 }, 00:14:39.917 "base_bdevs_list": [ 00:14:39.917 { 00:14:39.917 "name": "spare", 00:14:39.917 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:39.917 "is_configured": true, 00:14:39.917 "data_offset": 2048, 00:14:39.917 "data_size": 63488 00:14:39.917 }, 00:14:39.917 { 00:14:39.917 "name": "BaseBdev2", 00:14:39.917 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:39.917 "is_configured": true, 00:14:39.917 "data_offset": 2048, 00:14:39.917 "data_size": 63488 00:14:39.917 }, 00:14:39.917 { 00:14:39.917 "name": "BaseBdev3", 00:14:39.917 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:39.917 "is_configured": true, 00:14:39.917 "data_offset": 2048, 00:14:39.917 "data_size": 63488 00:14:39.917 } 00:14:39.917 ] 00:14:39.917 }' 00:14:39.917 12:34:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.917 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.917 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.917 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.917 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:39.917 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.917 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.177 [2024-11-19 12:34:45.181007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.177 [2024-11-19 12:34:45.245715] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:40.177 [2024-11-19 12:34:45.245814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.177 [2024-11-19 12:34:45.245836] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.177 [2024-11-19 12:34:45.245843] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.177 
12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.177 "name": "raid_bdev1", 00:14:40.177 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:40.177 "strip_size_kb": 64, 00:14:40.177 "state": "online", 00:14:40.177 "raid_level": "raid5f", 00:14:40.177 "superblock": true, 00:14:40.177 "num_base_bdevs": 3, 00:14:40.177 "num_base_bdevs_discovered": 2, 00:14:40.177 "num_base_bdevs_operational": 2, 00:14:40.177 "base_bdevs_list": [ 00:14:40.177 { 00:14:40.177 "name": null, 00:14:40.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.177 "is_configured": false, 00:14:40.177 "data_offset": 0, 00:14:40.177 "data_size": 63488 00:14:40.177 }, 00:14:40.177 { 00:14:40.177 "name": "BaseBdev2", 00:14:40.177 "uuid": 
"d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:40.177 "is_configured": true, 00:14:40.177 "data_offset": 2048, 00:14:40.177 "data_size": 63488 00:14:40.177 }, 00:14:40.177 { 00:14:40.177 "name": "BaseBdev3", 00:14:40.177 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:40.177 "is_configured": true, 00:14:40.177 "data_offset": 2048, 00:14:40.177 "data_size": 63488 00:14:40.177 } 00:14:40.177 ] 00:14:40.177 }' 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.177 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.438 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:40.438 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.438 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.698 [2024-11-19 12:34:45.698521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:40.698 [2024-11-19 12:34:45.698606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.698 [2024-11-19 12:34:45.698632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:40.698 [2024-11-19 12:34:45.698643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.698 [2024-11-19 12:34:45.699181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.698 [2024-11-19 12:34:45.699215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:40.698 [2024-11-19 12:34:45.699333] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:40.698 [2024-11-19 12:34:45.699358] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:14:40.698 [2024-11-19 12:34:45.699372] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:40.698 [2024-11-19 12:34:45.699403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.698 [2024-11-19 12:34:45.703282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:40.699 spare 00:14:40.699 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.699 12:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:40.699 [2024-11-19 12:34:45.705907] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.639 "name": 
"raid_bdev1", 00:14:41.639 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:41.639 "strip_size_kb": 64, 00:14:41.639 "state": "online", 00:14:41.639 "raid_level": "raid5f", 00:14:41.639 "superblock": true, 00:14:41.639 "num_base_bdevs": 3, 00:14:41.639 "num_base_bdevs_discovered": 3, 00:14:41.639 "num_base_bdevs_operational": 3, 00:14:41.639 "process": { 00:14:41.639 "type": "rebuild", 00:14:41.639 "target": "spare", 00:14:41.639 "progress": { 00:14:41.639 "blocks": 20480, 00:14:41.639 "percent": 16 00:14:41.639 } 00:14:41.639 }, 00:14:41.639 "base_bdevs_list": [ 00:14:41.639 { 00:14:41.639 "name": "spare", 00:14:41.639 "uuid": "e62ea5fc-2a5c-5fcd-8ea6-f370aeccc0b8", 00:14:41.639 "is_configured": true, 00:14:41.639 "data_offset": 2048, 00:14:41.639 "data_size": 63488 00:14:41.639 }, 00:14:41.639 { 00:14:41.639 "name": "BaseBdev2", 00:14:41.639 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:41.639 "is_configured": true, 00:14:41.639 "data_offset": 2048, 00:14:41.639 "data_size": 63488 00:14:41.639 }, 00:14:41.639 { 00:14:41.639 "name": "BaseBdev3", 00:14:41.639 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:41.639 "is_configured": true, 00:14:41.639 "data_offset": 2048, 00:14:41.639 "data_size": 63488 00:14:41.639 } 00:14:41.639 ] 00:14:41.639 }' 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:41.639 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.639 12:34:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.639 [2024-11-19 12:34:46.849934] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.912 [2024-11-19 12:34:46.915955] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:41.912 [2024-11-19 12:34:46.916068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.912 [2024-11-19 12:34:46.916090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.912 [2024-11-19 12:34:46.916104] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.912 "name": "raid_bdev1", 00:14:41.912 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:41.912 "strip_size_kb": 64, 00:14:41.912 "state": "online", 00:14:41.912 "raid_level": "raid5f", 00:14:41.912 "superblock": true, 00:14:41.912 "num_base_bdevs": 3, 00:14:41.912 "num_base_bdevs_discovered": 2, 00:14:41.912 "num_base_bdevs_operational": 2, 00:14:41.912 "base_bdevs_list": [ 00:14:41.912 { 00:14:41.912 "name": null, 00:14:41.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.912 "is_configured": false, 00:14:41.912 "data_offset": 0, 00:14:41.912 "data_size": 63488 00:14:41.912 }, 00:14:41.912 { 00:14:41.912 "name": "BaseBdev2", 00:14:41.912 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:41.912 "is_configured": true, 00:14:41.912 "data_offset": 2048, 00:14:41.912 "data_size": 63488 00:14:41.912 }, 00:14:41.912 { 00:14:41.912 "name": "BaseBdev3", 00:14:41.912 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:41.912 "is_configured": true, 00:14:41.912 "data_offset": 2048, 00:14:41.912 "data_size": 63488 00:14:41.912 } 00:14:41.912 ] 00:14:41.912 }' 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.912 12:34:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.183 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:14:42.183 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.183 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.183 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.183 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.183 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.184 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.184 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.184 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.184 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.184 "name": "raid_bdev1", 00:14:42.184 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:42.184 "strip_size_kb": 64, 00:14:42.184 "state": "online", 00:14:42.184 "raid_level": "raid5f", 00:14:42.184 "superblock": true, 00:14:42.184 "num_base_bdevs": 3, 00:14:42.184 "num_base_bdevs_discovered": 2, 00:14:42.184 "num_base_bdevs_operational": 2, 00:14:42.184 "base_bdevs_list": [ 00:14:42.184 { 00:14:42.184 "name": null, 00:14:42.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.184 "is_configured": false, 00:14:42.184 "data_offset": 0, 00:14:42.184 "data_size": 63488 00:14:42.184 }, 00:14:42.184 { 00:14:42.184 "name": "BaseBdev2", 00:14:42.184 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:42.184 "is_configured": true, 00:14:42.184 "data_offset": 2048, 00:14:42.184 "data_size": 63488 00:14:42.184 }, 00:14:42.184 { 
00:14:42.184 "name": "BaseBdev3", 00:14:42.184 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:42.184 "is_configured": true, 00:14:42.184 "data_offset": 2048, 00:14:42.184 "data_size": 63488 00:14:42.184 } 00:14:42.184 ] 00:14:42.184 }' 00:14:42.184 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.444 [2024-11-19 12:34:47.512868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:42.444 [2024-11-19 12:34:47.512951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.444 [2024-11-19 12:34:47.512978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:42.444 [2024-11-19 12:34:47.512990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.444 
[2024-11-19 12:34:47.513401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.444 [2024-11-19 12:34:47.513429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:42.444 [2024-11-19 12:34:47.513504] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:42.444 [2024-11-19 12:34:47.513524] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:42.444 [2024-11-19 12:34:47.513532] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:42.444 [2024-11-19 12:34:47.513547] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:42.444 BaseBdev1 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.444 12:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.384 12:34:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.384 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.384 "name": "raid_bdev1", 00:14:43.384 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:43.384 "strip_size_kb": 64, 00:14:43.384 "state": "online", 00:14:43.384 "raid_level": "raid5f", 00:14:43.384 "superblock": true, 00:14:43.384 "num_base_bdevs": 3, 00:14:43.384 "num_base_bdevs_discovered": 2, 00:14:43.385 "num_base_bdevs_operational": 2, 00:14:43.385 "base_bdevs_list": [ 00:14:43.385 { 00:14:43.385 "name": null, 00:14:43.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.385 "is_configured": false, 00:14:43.385 "data_offset": 0, 00:14:43.385 "data_size": 63488 00:14:43.385 }, 00:14:43.385 { 00:14:43.385 "name": "BaseBdev2", 00:14:43.385 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:43.385 "is_configured": true, 00:14:43.385 "data_offset": 2048, 00:14:43.385 "data_size": 63488 00:14:43.385 }, 00:14:43.385 { 00:14:43.385 "name": "BaseBdev3", 00:14:43.385 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:43.385 "is_configured": true, 00:14:43.385 "data_offset": 2048, 00:14:43.385 "data_size": 63488 00:14:43.385 } 00:14:43.385 ] 00:14:43.385 }' 00:14:43.385 12:34:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.385 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.955 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.955 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.955 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.955 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.955 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.955 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.955 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.955 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.955 12:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.955 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.955 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.955 "name": "raid_bdev1", 00:14:43.955 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:43.955 "strip_size_kb": 64, 00:14:43.955 "state": "online", 00:14:43.955 "raid_level": "raid5f", 00:14:43.955 "superblock": true, 00:14:43.955 "num_base_bdevs": 3, 00:14:43.955 "num_base_bdevs_discovered": 2, 00:14:43.955 "num_base_bdevs_operational": 2, 00:14:43.955 "base_bdevs_list": [ 00:14:43.956 { 00:14:43.956 "name": null, 00:14:43.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.956 "is_configured": false, 00:14:43.956 "data_offset": 0, 00:14:43.956 "data_size": 63488 
00:14:43.956 }, 00:14:43.956 { 00:14:43.956 "name": "BaseBdev2", 00:14:43.956 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:43.956 "is_configured": true, 00:14:43.956 "data_offset": 2048, 00:14:43.956 "data_size": 63488 00:14:43.956 }, 00:14:43.956 { 00:14:43.956 "name": "BaseBdev3", 00:14:43.956 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:43.956 "is_configured": true, 00:14:43.956 "data_offset": 2048, 00:14:43.956 "data_size": 63488 00:14:43.956 } 00:14:43.956 ] 00:14:43.956 }' 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:43.956 12:34:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.956 [2024-11-19 12:34:49.146138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.956 [2024-11-19 12:34:49.146336] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:43.956 [2024-11-19 12:34:49.146361] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:43.956 request: 00:14:43.956 { 00:14:43.956 "base_bdev": "BaseBdev1", 00:14:43.956 "raid_bdev": "raid_bdev1", 00:14:43.956 "method": "bdev_raid_add_base_bdev", 00:14:43.956 "req_id": 1 00:14:43.956 } 00:14:43.956 Got JSON-RPC error response 00:14:43.956 response: 00:14:43.956 { 00:14:43.956 "code": -22, 00:14:43.956 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:43.956 } 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.956 12:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.337 "name": "raid_bdev1", 00:14:45.337 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:45.337 "strip_size_kb": 64, 00:14:45.337 "state": "online", 00:14:45.337 "raid_level": "raid5f", 00:14:45.337 "superblock": true, 00:14:45.337 "num_base_bdevs": 3, 00:14:45.337 "num_base_bdevs_discovered": 2, 00:14:45.337 "num_base_bdevs_operational": 2, 00:14:45.337 "base_bdevs_list": [ 00:14:45.337 { 00:14:45.337 "name": null, 00:14:45.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.337 "is_configured": false, 00:14:45.337 
"data_offset": 0, 00:14:45.337 "data_size": 63488 00:14:45.337 }, 00:14:45.337 { 00:14:45.337 "name": "BaseBdev2", 00:14:45.337 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:45.337 "is_configured": true, 00:14:45.337 "data_offset": 2048, 00:14:45.337 "data_size": 63488 00:14:45.337 }, 00:14:45.337 { 00:14:45.337 "name": "BaseBdev3", 00:14:45.337 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:45.337 "is_configured": true, 00:14:45.337 "data_offset": 2048, 00:14:45.337 "data_size": 63488 00:14:45.337 } 00:14:45.337 ] 00:14:45.337 }' 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.337 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.598 "name": 
"raid_bdev1", 00:14:45.598 "uuid": "9f980c08-d22c-404e-b6e1-e568eb7ab5e5", 00:14:45.598 "strip_size_kb": 64, 00:14:45.598 "state": "online", 00:14:45.598 "raid_level": "raid5f", 00:14:45.598 "superblock": true, 00:14:45.598 "num_base_bdevs": 3, 00:14:45.598 "num_base_bdevs_discovered": 2, 00:14:45.598 "num_base_bdevs_operational": 2, 00:14:45.598 "base_bdevs_list": [ 00:14:45.598 { 00:14:45.598 "name": null, 00:14:45.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.598 "is_configured": false, 00:14:45.598 "data_offset": 0, 00:14:45.598 "data_size": 63488 00:14:45.598 }, 00:14:45.598 { 00:14:45.598 "name": "BaseBdev2", 00:14:45.598 "uuid": "d87e73f2-19a1-5a24-bb97-c3217d19b711", 00:14:45.598 "is_configured": true, 00:14:45.598 "data_offset": 2048, 00:14:45.598 "data_size": 63488 00:14:45.598 }, 00:14:45.598 { 00:14:45.598 "name": "BaseBdev3", 00:14:45.598 "uuid": "13c6a762-3008-5dc2-8653-71597b83b1ae", 00:14:45.598 "is_configured": true, 00:14:45.598 "data_offset": 2048, 00:14:45.598 "data_size": 63488 00:14:45.598 } 00:14:45.598 ] 00:14:45.598 }' 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92730 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92730 ']' 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92730 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:45.598 12:34:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92730 00:14:45.598 killing process with pid 92730 00:14:45.598 Received shutdown signal, test time was about 60.000000 seconds 00:14:45.598 00:14:45.598 Latency(us) 00:14:45.598 [2024-11-19T12:34:50.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.598 [2024-11-19T12:34:50.859Z] =================================================================================================================== 00:14:45.598 [2024-11-19T12:34:50.859Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92730' 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92730 00:14:45.598 [2024-11-19 12:34:50.765695] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.598 12:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92730 00:14:45.598 [2024-11-19 12:34:50.765859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.598 [2024-11-19 12:34:50.765935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.598 [2024-11-19 12:34:50.765946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:45.598 [2024-11-19 12:34:50.807540] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:45.858 12:34:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:45.858 00:14:45.858 real 0m21.817s 00:14:45.858 user 0m28.398s 00:14:45.858 sys 0m2.893s 00:14:45.858 12:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.858 ************************************ 00:14:45.858 END TEST raid5f_rebuild_test_sb 00:14:45.858 ************************************ 00:14:45.858 12:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.858 12:34:51 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:45.858 12:34:51 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:45.858 12:34:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:45.858 12:34:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.858 12:34:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.119 ************************************ 00:14:46.119 START TEST raid5f_state_function_test 00:14:46.119 ************************************ 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93470 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:46.119 Process raid pid: 93470 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93470' 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93470 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93470 ']' 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.119 12:34:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.119 [2024-11-19 12:34:51.226300] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:46.119 [2024-11-19 12:34:51.226462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.379 [2024-11-19 12:34:51.396135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.379 [2024-11-19 12:34:51.448103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.379 [2024-11-19 12:34:51.489921] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.379 [2024-11-19 12:34:51.489969] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.950 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.950 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:46.950 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:46.950 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.951 [2024-11-19 12:34:52.099351] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.951 [2024-11-19 12:34:52.099418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.951 [2024-11-19 
12:34:52.099430] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.951 [2024-11-19 12:34:52.099443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.951 [2024-11-19 12:34:52.099451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:46.951 [2024-11-19 12:34:52.099464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.951 [2024-11-19 12:34:52.099471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:46.951 [2024-11-19 12:34:52.099479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.951 12:34:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.951 "name": "Existed_Raid", 00:14:46.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.951 "strip_size_kb": 64, 00:14:46.951 "state": "configuring", 00:14:46.951 "raid_level": "raid5f", 00:14:46.951 "superblock": false, 00:14:46.951 "num_base_bdevs": 4, 00:14:46.951 "num_base_bdevs_discovered": 0, 00:14:46.951 "num_base_bdevs_operational": 4, 00:14:46.951 "base_bdevs_list": [ 00:14:46.951 { 00:14:46.951 "name": "BaseBdev1", 00:14:46.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.951 "is_configured": false, 00:14:46.951 "data_offset": 0, 00:14:46.951 "data_size": 0 00:14:46.951 }, 00:14:46.951 { 00:14:46.951 "name": "BaseBdev2", 00:14:46.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.951 "is_configured": false, 00:14:46.951 "data_offset": 0, 00:14:46.951 "data_size": 0 00:14:46.951 }, 00:14:46.951 { 00:14:46.951 "name": "BaseBdev3", 00:14:46.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.951 "is_configured": false, 00:14:46.951 "data_offset": 0, 00:14:46.951 "data_size": 0 00:14:46.951 }, 00:14:46.951 { 00:14:46.951 "name": "BaseBdev4", 00:14:46.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.951 "is_configured": false, 00:14:46.951 
"data_offset": 0, 00:14:46.951 "data_size": 0 00:14:46.951 } 00:14:46.951 ] 00:14:46.951 }' 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.951 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.522 [2024-11-19 12:34:52.554657] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.522 [2024-11-19 12:34:52.554723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.522 [2024-11-19 12:34:52.566675] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.522 [2024-11-19 12:34:52.566738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.522 [2024-11-19 12:34:52.566765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.522 [2024-11-19 12:34:52.566775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.522 [2024-11-19 
12:34:52.566781] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:47.522 [2024-11-19 12:34:52.566789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:47.522 [2024-11-19 12:34:52.566795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:47.522 [2024-11-19 12:34:52.566803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.522 [2024-11-19 12:34:52.587418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.522 BaseBdev1 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:47.522 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.523 [ 00:14:47.523 { 00:14:47.523 "name": "BaseBdev1", 00:14:47.523 "aliases": [ 00:14:47.523 "ac2a7a60-5219-42ae-b542-0c82f72649e9" 00:14:47.523 ], 00:14:47.523 "product_name": "Malloc disk", 00:14:47.523 "block_size": 512, 00:14:47.523 "num_blocks": 65536, 00:14:47.523 "uuid": "ac2a7a60-5219-42ae-b542-0c82f72649e9", 00:14:47.523 "assigned_rate_limits": { 00:14:47.523 "rw_ios_per_sec": 0, 00:14:47.523 "rw_mbytes_per_sec": 0, 00:14:47.523 "r_mbytes_per_sec": 0, 00:14:47.523 "w_mbytes_per_sec": 0 00:14:47.523 }, 00:14:47.523 "claimed": true, 00:14:47.523 "claim_type": "exclusive_write", 00:14:47.523 "zoned": false, 00:14:47.523 "supported_io_types": { 00:14:47.523 "read": true, 00:14:47.523 "write": true, 00:14:47.523 "unmap": true, 00:14:47.523 "flush": true, 00:14:47.523 "reset": true, 00:14:47.523 "nvme_admin": false, 00:14:47.523 "nvme_io": false, 00:14:47.523 "nvme_io_md": false, 00:14:47.523 "write_zeroes": true, 00:14:47.523 "zcopy": true, 00:14:47.523 "get_zone_info": false, 00:14:47.523 "zone_management": false, 00:14:47.523 "zone_append": false, 00:14:47.523 "compare": false, 00:14:47.523 "compare_and_write": false, 00:14:47.523 "abort": true, 00:14:47.523 "seek_hole": false, 00:14:47.523 "seek_data": false, 00:14:47.523 "copy": true, 00:14:47.523 
"nvme_iov_md": false 00:14:47.523 }, 00:14:47.523 "memory_domains": [ 00:14:47.523 { 00:14:47.523 "dma_device_id": "system", 00:14:47.523 "dma_device_type": 1 00:14:47.523 }, 00:14:47.523 { 00:14:47.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.523 "dma_device_type": 2 00:14:47.523 } 00:14:47.523 ], 00:14:47.523 "driver_specific": {} 00:14:47.523 } 00:14:47.523 ] 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.523 "name": "Existed_Raid", 00:14:47.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.523 "strip_size_kb": 64, 00:14:47.523 "state": "configuring", 00:14:47.523 "raid_level": "raid5f", 00:14:47.523 "superblock": false, 00:14:47.523 "num_base_bdevs": 4, 00:14:47.523 "num_base_bdevs_discovered": 1, 00:14:47.523 "num_base_bdevs_operational": 4, 00:14:47.523 "base_bdevs_list": [ 00:14:47.523 { 00:14:47.523 "name": "BaseBdev1", 00:14:47.523 "uuid": "ac2a7a60-5219-42ae-b542-0c82f72649e9", 00:14:47.523 "is_configured": true, 00:14:47.523 "data_offset": 0, 00:14:47.523 "data_size": 65536 00:14:47.523 }, 00:14:47.523 { 00:14:47.523 "name": "BaseBdev2", 00:14:47.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.523 "is_configured": false, 00:14:47.523 "data_offset": 0, 00:14:47.523 "data_size": 0 00:14:47.523 }, 00:14:47.523 { 00:14:47.523 "name": "BaseBdev3", 00:14:47.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.523 "is_configured": false, 00:14:47.523 "data_offset": 0, 00:14:47.523 "data_size": 0 00:14:47.523 }, 00:14:47.523 { 00:14:47.523 "name": "BaseBdev4", 00:14:47.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.523 "is_configured": false, 00:14:47.523 "data_offset": 0, 00:14:47.523 "data_size": 0 00:14:47.523 } 00:14:47.523 ] 00:14:47.523 }' 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.523 12:34:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.094 [2024-11-19 12:34:53.098652] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.094 [2024-11-19 12:34:53.098730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.094 [2024-11-19 12:34:53.110666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.094 [2024-11-19 12:34:53.112540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.094 [2024-11-19 12:34:53.112589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.094 [2024-11-19 12:34:53.112599] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:48.094 [2024-11-19 12:34:53.112608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:48.094 [2024-11-19 12:34:53.112615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:48.094 [2024-11-19 12:34:53.112623] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.094 12:34:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.094 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.094 "name": "Existed_Raid", 00:14:48.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.094 "strip_size_kb": 64, 00:14:48.094 "state": "configuring", 00:14:48.094 "raid_level": "raid5f", 00:14:48.094 "superblock": false, 00:14:48.094 "num_base_bdevs": 4, 00:14:48.094 "num_base_bdevs_discovered": 1, 00:14:48.094 "num_base_bdevs_operational": 4, 00:14:48.094 "base_bdevs_list": [ 00:14:48.094 { 00:14:48.094 "name": "BaseBdev1", 00:14:48.094 "uuid": "ac2a7a60-5219-42ae-b542-0c82f72649e9", 00:14:48.094 "is_configured": true, 00:14:48.095 "data_offset": 0, 00:14:48.095 "data_size": 65536 00:14:48.095 }, 00:14:48.095 { 00:14:48.095 "name": "BaseBdev2", 00:14:48.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.095 "is_configured": false, 00:14:48.095 "data_offset": 0, 00:14:48.095 "data_size": 0 00:14:48.095 }, 00:14:48.095 { 00:14:48.095 "name": "BaseBdev3", 00:14:48.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.095 "is_configured": false, 00:14:48.095 "data_offset": 0, 00:14:48.095 "data_size": 0 00:14:48.095 }, 00:14:48.095 { 00:14:48.095 "name": "BaseBdev4", 00:14:48.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.095 "is_configured": false, 00:14:48.095 "data_offset": 0, 00:14:48.095 "data_size": 0 00:14:48.095 } 00:14:48.095 ] 00:14:48.095 }' 00:14:48.095 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.095 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.355 12:34:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.355 [2024-11-19 12:34:53.500373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.355 BaseBdev2 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.355 [ 00:14:48.355 { 00:14:48.355 "name": 
"BaseBdev2", 00:14:48.355 "aliases": [ 00:14:48.355 "35338e92-5a29-41e0-ac27-3217e25742c1" 00:14:48.355 ], 00:14:48.355 "product_name": "Malloc disk", 00:14:48.355 "block_size": 512, 00:14:48.355 "num_blocks": 65536, 00:14:48.355 "uuid": "35338e92-5a29-41e0-ac27-3217e25742c1", 00:14:48.355 "assigned_rate_limits": { 00:14:48.355 "rw_ios_per_sec": 0, 00:14:48.355 "rw_mbytes_per_sec": 0, 00:14:48.355 "r_mbytes_per_sec": 0, 00:14:48.355 "w_mbytes_per_sec": 0 00:14:48.355 }, 00:14:48.355 "claimed": true, 00:14:48.355 "claim_type": "exclusive_write", 00:14:48.355 "zoned": false, 00:14:48.355 "supported_io_types": { 00:14:48.355 "read": true, 00:14:48.355 "write": true, 00:14:48.355 "unmap": true, 00:14:48.355 "flush": true, 00:14:48.355 "reset": true, 00:14:48.355 "nvme_admin": false, 00:14:48.355 "nvme_io": false, 00:14:48.355 "nvme_io_md": false, 00:14:48.355 "write_zeroes": true, 00:14:48.355 "zcopy": true, 00:14:48.355 "get_zone_info": false, 00:14:48.355 "zone_management": false, 00:14:48.355 "zone_append": false, 00:14:48.355 "compare": false, 00:14:48.355 "compare_and_write": false, 00:14:48.355 "abort": true, 00:14:48.355 "seek_hole": false, 00:14:48.355 "seek_data": false, 00:14:48.355 "copy": true, 00:14:48.355 "nvme_iov_md": false 00:14:48.355 }, 00:14:48.355 "memory_domains": [ 00:14:48.355 { 00:14:48.355 "dma_device_id": "system", 00:14:48.355 "dma_device_type": 1 00:14:48.355 }, 00:14:48.355 { 00:14:48.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.355 "dma_device_type": 2 00:14:48.355 } 00:14:48.355 ], 00:14:48.355 "driver_specific": {} 00:14:48.355 } 00:14:48.355 ] 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.355 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.355 "name": "Existed_Raid", 00:14:48.355 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:48.355 "strip_size_kb": 64, 00:14:48.355 "state": "configuring", 00:14:48.355 "raid_level": "raid5f", 00:14:48.355 "superblock": false, 00:14:48.355 "num_base_bdevs": 4, 00:14:48.355 "num_base_bdevs_discovered": 2, 00:14:48.355 "num_base_bdevs_operational": 4, 00:14:48.355 "base_bdevs_list": [ 00:14:48.355 { 00:14:48.355 "name": "BaseBdev1", 00:14:48.355 "uuid": "ac2a7a60-5219-42ae-b542-0c82f72649e9", 00:14:48.355 "is_configured": true, 00:14:48.356 "data_offset": 0, 00:14:48.356 "data_size": 65536 00:14:48.356 }, 00:14:48.356 { 00:14:48.356 "name": "BaseBdev2", 00:14:48.356 "uuid": "35338e92-5a29-41e0-ac27-3217e25742c1", 00:14:48.356 "is_configured": true, 00:14:48.356 "data_offset": 0, 00:14:48.356 "data_size": 65536 00:14:48.356 }, 00:14:48.356 { 00:14:48.356 "name": "BaseBdev3", 00:14:48.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.356 "is_configured": false, 00:14:48.356 "data_offset": 0, 00:14:48.356 "data_size": 0 00:14:48.356 }, 00:14:48.356 { 00:14:48.356 "name": "BaseBdev4", 00:14:48.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.356 "is_configured": false, 00:14:48.356 "data_offset": 0, 00:14:48.356 "data_size": 0 00:14:48.356 } 00:14:48.356 ] 00:14:48.356 }' 00:14:48.356 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.356 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.926 [2024-11-19 12:34:53.994512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.926 BaseBdev3 00:14:48.926 12:34:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.926 12:34:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.926 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.926 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.926 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.926 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.926 [ 00:14:48.926 { 00:14:48.926 "name": "BaseBdev3", 00:14:48.926 "aliases": [ 00:14:48.926 "d652d871-7a17-4735-aab6-f75825488c86" 00:14:48.926 ], 00:14:48.926 "product_name": "Malloc disk", 00:14:48.926 "block_size": 512, 00:14:48.926 "num_blocks": 65536, 00:14:48.926 "uuid": "d652d871-7a17-4735-aab6-f75825488c86", 00:14:48.926 "assigned_rate_limits": { 00:14:48.926 "rw_ios_per_sec": 0, 00:14:48.926 
"rw_mbytes_per_sec": 0, 00:14:48.926 "r_mbytes_per_sec": 0, 00:14:48.926 "w_mbytes_per_sec": 0 00:14:48.926 }, 00:14:48.926 "claimed": true, 00:14:48.926 "claim_type": "exclusive_write", 00:14:48.926 "zoned": false, 00:14:48.926 "supported_io_types": { 00:14:48.926 "read": true, 00:14:48.926 "write": true, 00:14:48.926 "unmap": true, 00:14:48.926 "flush": true, 00:14:48.926 "reset": true, 00:14:48.926 "nvme_admin": false, 00:14:48.926 "nvme_io": false, 00:14:48.926 "nvme_io_md": false, 00:14:48.926 "write_zeroes": true, 00:14:48.926 "zcopy": true, 00:14:48.926 "get_zone_info": false, 00:14:48.927 "zone_management": false, 00:14:48.927 "zone_append": false, 00:14:48.927 "compare": false, 00:14:48.927 "compare_and_write": false, 00:14:48.927 "abort": true, 00:14:48.927 "seek_hole": false, 00:14:48.927 "seek_data": false, 00:14:48.927 "copy": true, 00:14:48.927 "nvme_iov_md": false 00:14:48.927 }, 00:14:48.927 "memory_domains": [ 00:14:48.927 { 00:14:48.927 "dma_device_id": "system", 00:14:48.927 "dma_device_type": 1 00:14:48.927 }, 00:14:48.927 { 00:14:48.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.927 "dma_device_type": 2 00:14:48.927 } 00:14:48.927 ], 00:14:48.927 "driver_specific": {} 00:14:48.927 } 00:14:48.927 ] 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.927 12:34:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.927 "name": "Existed_Raid", 00:14:48.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.927 "strip_size_kb": 64, 00:14:48.927 "state": "configuring", 00:14:48.927 "raid_level": "raid5f", 00:14:48.927 "superblock": false, 00:14:48.927 "num_base_bdevs": 4, 00:14:48.927 "num_base_bdevs_discovered": 3, 00:14:48.927 "num_base_bdevs_operational": 4, 00:14:48.927 "base_bdevs_list": [ 00:14:48.927 { 
00:14:48.927 "name": "BaseBdev1", 00:14:48.927 "uuid": "ac2a7a60-5219-42ae-b542-0c82f72649e9", 00:14:48.927 "is_configured": true, 00:14:48.927 "data_offset": 0, 00:14:48.927 "data_size": 65536 00:14:48.927 }, 00:14:48.927 { 00:14:48.927 "name": "BaseBdev2", 00:14:48.927 "uuid": "35338e92-5a29-41e0-ac27-3217e25742c1", 00:14:48.927 "is_configured": true, 00:14:48.927 "data_offset": 0, 00:14:48.927 "data_size": 65536 00:14:48.927 }, 00:14:48.927 { 00:14:48.927 "name": "BaseBdev3", 00:14:48.927 "uuid": "d652d871-7a17-4735-aab6-f75825488c86", 00:14:48.927 "is_configured": true, 00:14:48.927 "data_offset": 0, 00:14:48.927 "data_size": 65536 00:14:48.927 }, 00:14:48.927 { 00:14:48.927 "name": "BaseBdev4", 00:14:48.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.927 "is_configured": false, 00:14:48.927 "data_offset": 0, 00:14:48.927 "data_size": 0 00:14:48.927 } 00:14:48.927 ] 00:14:48.927 }' 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.927 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.498 [2024-11-19 12:34:54.468857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.498 [2024-11-19 12:34:54.468927] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:49.498 [2024-11-19 12:34:54.468943] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:49.498 [2024-11-19 12:34:54.469238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:49.498 [2024-11-19 
12:34:54.469690] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:49.498 [2024-11-19 12:34:54.469704] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:49.498 [2024-11-19 12:34:54.469904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.498 BaseBdev4 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.498 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:49.498 [ 00:14:49.498 { 00:14:49.498 "name": "BaseBdev4", 00:14:49.498 "aliases": [ 00:14:49.499 "0f1b6ae8-0994-44cb-a510-cb215c1c6e20" 00:14:49.499 ], 00:14:49.499 "product_name": "Malloc disk", 00:14:49.499 "block_size": 512, 00:14:49.499 "num_blocks": 65536, 00:14:49.499 "uuid": "0f1b6ae8-0994-44cb-a510-cb215c1c6e20", 00:14:49.499 "assigned_rate_limits": { 00:14:49.499 "rw_ios_per_sec": 0, 00:14:49.499 "rw_mbytes_per_sec": 0, 00:14:49.499 "r_mbytes_per_sec": 0, 00:14:49.499 "w_mbytes_per_sec": 0 00:14:49.499 }, 00:14:49.499 "claimed": true, 00:14:49.499 "claim_type": "exclusive_write", 00:14:49.499 "zoned": false, 00:14:49.499 "supported_io_types": { 00:14:49.499 "read": true, 00:14:49.499 "write": true, 00:14:49.499 "unmap": true, 00:14:49.499 "flush": true, 00:14:49.499 "reset": true, 00:14:49.499 "nvme_admin": false, 00:14:49.499 "nvme_io": false, 00:14:49.499 "nvme_io_md": false, 00:14:49.499 "write_zeroes": true, 00:14:49.499 "zcopy": true, 00:14:49.499 "get_zone_info": false, 00:14:49.499 "zone_management": false, 00:14:49.499 "zone_append": false, 00:14:49.499 "compare": false, 00:14:49.499 "compare_and_write": false, 00:14:49.499 "abort": true, 00:14:49.499 "seek_hole": false, 00:14:49.499 "seek_data": false, 00:14:49.499 "copy": true, 00:14:49.499 "nvme_iov_md": false 00:14:49.499 }, 00:14:49.499 "memory_domains": [ 00:14:49.499 { 00:14:49.499 "dma_device_id": "system", 00:14:49.499 "dma_device_type": 1 00:14:49.499 }, 00:14:49.499 { 00:14:49.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.499 "dma_device_type": 2 00:14:49.499 } 00:14:49.499 ], 00:14:49.499 "driver_specific": {} 00:14:49.499 } 00:14:49.499 ] 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:49.499 12:34:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.499 "name": "Existed_Raid", 00:14:49.499 
"uuid": "1001aca9-1782-45ef-9606-52f9fcf7480d", 00:14:49.499 "strip_size_kb": 64, 00:14:49.499 "state": "online", 00:14:49.499 "raid_level": "raid5f", 00:14:49.499 "superblock": false, 00:14:49.499 "num_base_bdevs": 4, 00:14:49.499 "num_base_bdevs_discovered": 4, 00:14:49.499 "num_base_bdevs_operational": 4, 00:14:49.499 "base_bdevs_list": [ 00:14:49.499 { 00:14:49.499 "name": "BaseBdev1", 00:14:49.499 "uuid": "ac2a7a60-5219-42ae-b542-0c82f72649e9", 00:14:49.499 "is_configured": true, 00:14:49.499 "data_offset": 0, 00:14:49.499 "data_size": 65536 00:14:49.499 }, 00:14:49.499 { 00:14:49.499 "name": "BaseBdev2", 00:14:49.499 "uuid": "35338e92-5a29-41e0-ac27-3217e25742c1", 00:14:49.499 "is_configured": true, 00:14:49.499 "data_offset": 0, 00:14:49.499 "data_size": 65536 00:14:49.499 }, 00:14:49.499 { 00:14:49.499 "name": "BaseBdev3", 00:14:49.499 "uuid": "d652d871-7a17-4735-aab6-f75825488c86", 00:14:49.499 "is_configured": true, 00:14:49.499 "data_offset": 0, 00:14:49.499 "data_size": 65536 00:14:49.499 }, 00:14:49.499 { 00:14:49.499 "name": "BaseBdev4", 00:14:49.499 "uuid": "0f1b6ae8-0994-44cb-a510-cb215c1c6e20", 00:14:49.499 "is_configured": true, 00:14:49.499 "data_offset": 0, 00:14:49.499 "data_size": 65536 00:14:49.499 } 00:14:49.499 ] 00:14:49.499 }' 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.499 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.759 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.759 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:49.759 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:49.759 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:49.759 12:34:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:49.759 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:49.759 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:49.759 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.759 12:34:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:49.759 12:34:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.759 [2024-11-19 12:34:54.980310] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.759 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.759 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:49.759 "name": "Existed_Raid", 00:14:49.759 "aliases": [ 00:14:49.759 "1001aca9-1782-45ef-9606-52f9fcf7480d" 00:14:49.759 ], 00:14:49.759 "product_name": "Raid Volume", 00:14:49.759 "block_size": 512, 00:14:49.759 "num_blocks": 196608, 00:14:49.759 "uuid": "1001aca9-1782-45ef-9606-52f9fcf7480d", 00:14:49.759 "assigned_rate_limits": { 00:14:49.759 "rw_ios_per_sec": 0, 00:14:49.759 "rw_mbytes_per_sec": 0, 00:14:49.759 "r_mbytes_per_sec": 0, 00:14:49.760 "w_mbytes_per_sec": 0 00:14:49.760 }, 00:14:49.760 "claimed": false, 00:14:49.760 "zoned": false, 00:14:49.760 "supported_io_types": { 00:14:49.760 "read": true, 00:14:49.760 "write": true, 00:14:49.760 "unmap": false, 00:14:49.760 "flush": false, 00:14:49.760 "reset": true, 00:14:49.760 "nvme_admin": false, 00:14:49.760 "nvme_io": false, 00:14:49.760 "nvme_io_md": false, 00:14:49.760 "write_zeroes": true, 00:14:49.760 "zcopy": false, 00:14:49.760 "get_zone_info": false, 00:14:49.760 "zone_management": false, 00:14:49.760 "zone_append": false, 
00:14:49.760 "compare": false, 00:14:49.760 "compare_and_write": false, 00:14:49.760 "abort": false, 00:14:49.760 "seek_hole": false, 00:14:49.760 "seek_data": false, 00:14:49.760 "copy": false, 00:14:49.760 "nvme_iov_md": false 00:14:49.760 }, 00:14:49.760 "driver_specific": { 00:14:49.760 "raid": { 00:14:49.760 "uuid": "1001aca9-1782-45ef-9606-52f9fcf7480d", 00:14:49.760 "strip_size_kb": 64, 00:14:49.760 "state": "online", 00:14:49.760 "raid_level": "raid5f", 00:14:49.760 "superblock": false, 00:14:49.760 "num_base_bdevs": 4, 00:14:49.760 "num_base_bdevs_discovered": 4, 00:14:49.760 "num_base_bdevs_operational": 4, 00:14:49.760 "base_bdevs_list": [ 00:14:49.760 { 00:14:49.760 "name": "BaseBdev1", 00:14:49.760 "uuid": "ac2a7a60-5219-42ae-b542-0c82f72649e9", 00:14:49.760 "is_configured": true, 00:14:49.760 "data_offset": 0, 00:14:49.760 "data_size": 65536 00:14:49.760 }, 00:14:49.760 { 00:14:49.760 "name": "BaseBdev2", 00:14:49.760 "uuid": "35338e92-5a29-41e0-ac27-3217e25742c1", 00:14:49.760 "is_configured": true, 00:14:49.760 "data_offset": 0, 00:14:49.760 "data_size": 65536 00:14:49.760 }, 00:14:49.760 { 00:14:49.760 "name": "BaseBdev3", 00:14:49.760 "uuid": "d652d871-7a17-4735-aab6-f75825488c86", 00:14:49.760 "is_configured": true, 00:14:49.760 "data_offset": 0, 00:14:49.760 "data_size": 65536 00:14:49.760 }, 00:14:49.760 { 00:14:49.760 "name": "BaseBdev4", 00:14:49.760 "uuid": "0f1b6ae8-0994-44cb-a510-cb215c1c6e20", 00:14:49.760 "is_configured": true, 00:14:49.760 "data_offset": 0, 00:14:49.760 "data_size": 65536 00:14:49.760 } 00:14:49.760 ] 00:14:49.760 } 00:14:49.760 } 00:14:49.760 }' 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:50.020 BaseBdev2 00:14:50.020 BaseBdev3 00:14:50.020 BaseBdev4' 00:14:50.020 
12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.020 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.021 12:34:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.021 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.281 [2024-11-19 12:34:55.295584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.281 12:34:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.281 "name": "Existed_Raid", 00:14:50.281 "uuid": "1001aca9-1782-45ef-9606-52f9fcf7480d", 00:14:50.281 "strip_size_kb": 64, 00:14:50.281 "state": "online", 00:14:50.281 "raid_level": "raid5f", 00:14:50.281 "superblock": false, 00:14:50.281 "num_base_bdevs": 4, 00:14:50.281 "num_base_bdevs_discovered": 3, 00:14:50.281 "num_base_bdevs_operational": 3, 00:14:50.281 "base_bdevs_list": [ 00:14:50.281 { 00:14:50.281 "name": null, 00:14:50.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.281 "is_configured": false, 00:14:50.281 "data_offset": 0, 00:14:50.281 "data_size": 65536 00:14:50.281 }, 00:14:50.281 { 00:14:50.281 "name": "BaseBdev2", 00:14:50.281 "uuid": "35338e92-5a29-41e0-ac27-3217e25742c1", 00:14:50.281 "is_configured": true, 00:14:50.281 "data_offset": 0, 00:14:50.281 "data_size": 65536 00:14:50.281 }, 00:14:50.281 { 00:14:50.281 "name": "BaseBdev3", 
00:14:50.281 "uuid": "d652d871-7a17-4735-aab6-f75825488c86", 00:14:50.281 "is_configured": true, 00:14:50.281 "data_offset": 0, 00:14:50.281 "data_size": 65536 00:14:50.281 }, 00:14:50.281 { 00:14:50.281 "name": "BaseBdev4", 00:14:50.281 "uuid": "0f1b6ae8-0994-44cb-a510-cb215c1c6e20", 00:14:50.281 "is_configured": true, 00:14:50.281 "data_offset": 0, 00:14:50.281 "data_size": 65536 00:14:50.281 } 00:14:50.281 ] 00:14:50.281 }' 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.281 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.541 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:50.541 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.541 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.541 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.541 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.541 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:50.541 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:50.802 [2024-11-19 12:34:55.813961] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:50.802 [2024-11-19 12:34:55.814184] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.802 [2024-11-19 12:34:55.825217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.802 [2024-11-19 12:34:55.881181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:50.802 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.803 [2024-11-19 12:34:55.952265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:50.803 [2024-11-19 12:34:55.952331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.803 
12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.803 12:34:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.803 BaseBdev2 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.803 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.803 [ 00:14:50.803 { 00:14:50.803 "name": "BaseBdev2", 00:14:50.803 "aliases": [ 00:14:50.803 "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df" 00:14:50.803 ], 00:14:50.803 "product_name": "Malloc disk", 00:14:50.803 "block_size": 512, 00:14:50.803 "num_blocks": 65536, 00:14:50.803 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:50.803 "assigned_rate_limits": { 00:14:50.803 "rw_ios_per_sec": 0, 00:14:50.803 "rw_mbytes_per_sec": 0, 00:14:50.803 "r_mbytes_per_sec": 0, 00:14:51.064 "w_mbytes_per_sec": 0 00:14:51.064 }, 00:14:51.064 "claimed": false, 00:14:51.064 "zoned": false, 00:14:51.064 "supported_io_types": { 00:14:51.064 "read": true, 00:14:51.064 "write": true, 00:14:51.064 "unmap": true, 00:14:51.064 "flush": true, 00:14:51.064 "reset": true, 00:14:51.064 "nvme_admin": false, 00:14:51.064 "nvme_io": false, 00:14:51.064 "nvme_io_md": false, 00:14:51.064 "write_zeroes": true, 00:14:51.064 "zcopy": true, 
00:14:51.064 "get_zone_info": false, 00:14:51.064 "zone_management": false, 00:14:51.064 "zone_append": false, 00:14:51.064 "compare": false, 00:14:51.064 "compare_and_write": false, 00:14:51.064 "abort": true, 00:14:51.064 "seek_hole": false, 00:14:51.064 "seek_data": false, 00:14:51.064 "copy": true, 00:14:51.064 "nvme_iov_md": false 00:14:51.064 }, 00:14:51.064 "memory_domains": [ 00:14:51.064 { 00:14:51.064 "dma_device_id": "system", 00:14:51.064 "dma_device_type": 1 00:14:51.064 }, 00:14:51.064 { 00:14:51.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.064 "dma_device_type": 2 00:14:51.064 } 00:14:51.064 ], 00:14:51.064 "driver_specific": {} 00:14:51.064 } 00:14:51.064 ] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.065 BaseBdev3 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.065 12:34:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.065 [ 00:14:51.065 { 00:14:51.065 "name": "BaseBdev3", 00:14:51.065 "aliases": [ 00:14:51.065 "8f42b912-f996-44d8-9914-47cded91ddf9" 00:14:51.065 ], 00:14:51.065 "product_name": "Malloc disk", 00:14:51.065 "block_size": 512, 00:14:51.065 "num_blocks": 65536, 00:14:51.065 "uuid": "8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:51.065 "assigned_rate_limits": { 00:14:51.065 "rw_ios_per_sec": 0, 00:14:51.065 "rw_mbytes_per_sec": 0, 00:14:51.065 "r_mbytes_per_sec": 0, 00:14:51.065 "w_mbytes_per_sec": 0 00:14:51.065 }, 00:14:51.065 "claimed": false, 00:14:51.065 "zoned": false, 00:14:51.065 "supported_io_types": { 00:14:51.065 "read": true, 00:14:51.065 "write": true, 00:14:51.065 "unmap": true, 00:14:51.065 "flush": true, 00:14:51.065 "reset": true, 00:14:51.065 "nvme_admin": false, 00:14:51.065 "nvme_io": false, 00:14:51.065 "nvme_io_md": false, 00:14:51.065 
"write_zeroes": true, 00:14:51.065 "zcopy": true, 00:14:51.065 "get_zone_info": false, 00:14:51.065 "zone_management": false, 00:14:51.065 "zone_append": false, 00:14:51.065 "compare": false, 00:14:51.065 "compare_and_write": false, 00:14:51.065 "abort": true, 00:14:51.065 "seek_hole": false, 00:14:51.065 "seek_data": false, 00:14:51.065 "copy": true, 00:14:51.065 "nvme_iov_md": false 00:14:51.065 }, 00:14:51.065 "memory_domains": [ 00:14:51.065 { 00:14:51.065 "dma_device_id": "system", 00:14:51.065 "dma_device_type": 1 00:14:51.065 }, 00:14:51.065 { 00:14:51.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.065 "dma_device_type": 2 00:14:51.065 } 00:14:51.065 ], 00:14:51.065 "driver_specific": {} 00:14:51.065 } 00:14:51.065 ] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.065 BaseBdev4 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.065 [ 00:14:51.065 { 00:14:51.065 "name": "BaseBdev4", 00:14:51.065 "aliases": [ 00:14:51.065 "9f3d8587-9f11-49d7-95f1-e97f558722ae" 00:14:51.065 ], 00:14:51.065 "product_name": "Malloc disk", 00:14:51.065 "block_size": 512, 00:14:51.065 "num_blocks": 65536, 00:14:51.065 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:51.065 "assigned_rate_limits": { 00:14:51.065 "rw_ios_per_sec": 0, 00:14:51.065 "rw_mbytes_per_sec": 0, 00:14:51.065 "r_mbytes_per_sec": 0, 00:14:51.065 "w_mbytes_per_sec": 0 00:14:51.065 }, 00:14:51.065 "claimed": false, 00:14:51.065 "zoned": false, 00:14:51.065 "supported_io_types": { 00:14:51.065 "read": true, 00:14:51.065 "write": true, 00:14:51.065 "unmap": true, 00:14:51.065 "flush": true, 00:14:51.065 "reset": true, 00:14:51.065 "nvme_admin": false, 00:14:51.065 "nvme_io": false, 00:14:51.065 
"nvme_io_md": false, 00:14:51.065 "write_zeroes": true, 00:14:51.065 "zcopy": true, 00:14:51.065 "get_zone_info": false, 00:14:51.065 "zone_management": false, 00:14:51.065 "zone_append": false, 00:14:51.065 "compare": false, 00:14:51.065 "compare_and_write": false, 00:14:51.065 "abort": true, 00:14:51.065 "seek_hole": false, 00:14:51.065 "seek_data": false, 00:14:51.065 "copy": true, 00:14:51.065 "nvme_iov_md": false 00:14:51.065 }, 00:14:51.065 "memory_domains": [ 00:14:51.065 { 00:14:51.065 "dma_device_id": "system", 00:14:51.065 "dma_device_type": 1 00:14:51.065 }, 00:14:51.065 { 00:14:51.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.065 "dma_device_type": 2 00:14:51.065 } 00:14:51.065 ], 00:14:51.065 "driver_specific": {} 00:14:51.065 } 00:14:51.065 ] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:51.065 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.066 [2024-11-19 12:34:56.193945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.066 [2024-11-19 12:34:56.194090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.066 [2024-11-19 12:34:56.194119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:51.066 [2024-11-19 12:34:56.196019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.066 [2024-11-19 12:34:56.196072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.066 "name": "Existed_Raid", 00:14:51.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.066 "strip_size_kb": 64, 00:14:51.066 "state": "configuring", 00:14:51.066 "raid_level": "raid5f", 00:14:51.066 "superblock": false, 00:14:51.066 "num_base_bdevs": 4, 00:14:51.066 "num_base_bdevs_discovered": 3, 00:14:51.066 "num_base_bdevs_operational": 4, 00:14:51.066 "base_bdevs_list": [ 00:14:51.066 { 00:14:51.066 "name": "BaseBdev1", 00:14:51.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.066 "is_configured": false, 00:14:51.066 "data_offset": 0, 00:14:51.066 "data_size": 0 00:14:51.066 }, 00:14:51.066 { 00:14:51.066 "name": "BaseBdev2", 00:14:51.066 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:51.066 "is_configured": true, 00:14:51.066 "data_offset": 0, 00:14:51.066 "data_size": 65536 00:14:51.066 }, 00:14:51.066 { 00:14:51.066 "name": "BaseBdev3", 00:14:51.066 "uuid": "8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:51.066 "is_configured": true, 00:14:51.066 "data_offset": 0, 00:14:51.066 "data_size": 65536 00:14:51.066 }, 00:14:51.066 { 00:14:51.066 "name": "BaseBdev4", 00:14:51.066 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:51.066 "is_configured": true, 00:14:51.066 "data_offset": 0, 00:14:51.066 "data_size": 65536 00:14:51.066 } 00:14:51.066 ] 00:14:51.066 }' 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.066 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.637 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:51.637 12:34:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.637 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.637 [2024-11-19 12:34:56.649262] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.637 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.637 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.637 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.638 "name": "Existed_Raid", 00:14:51.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.638 "strip_size_kb": 64, 00:14:51.638 "state": "configuring", 00:14:51.638 "raid_level": "raid5f", 00:14:51.638 "superblock": false, 00:14:51.638 "num_base_bdevs": 4, 00:14:51.638 "num_base_bdevs_discovered": 2, 00:14:51.638 "num_base_bdevs_operational": 4, 00:14:51.638 "base_bdevs_list": [ 00:14:51.638 { 00:14:51.638 "name": "BaseBdev1", 00:14:51.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.638 "is_configured": false, 00:14:51.638 "data_offset": 0, 00:14:51.638 "data_size": 0 00:14:51.638 }, 00:14:51.638 { 00:14:51.638 "name": null, 00:14:51.638 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:51.638 "is_configured": false, 00:14:51.638 "data_offset": 0, 00:14:51.638 "data_size": 65536 00:14:51.638 }, 00:14:51.638 { 00:14:51.638 "name": "BaseBdev3", 00:14:51.638 "uuid": "8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:51.638 "is_configured": true, 00:14:51.638 "data_offset": 0, 00:14:51.638 "data_size": 65536 00:14:51.638 }, 00:14:51.638 { 00:14:51.638 "name": "BaseBdev4", 00:14:51.638 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:51.638 "is_configured": true, 00:14:51.638 "data_offset": 0, 00:14:51.638 "data_size": 65536 00:14:51.638 } 00:14:51.638 ] 00:14:51.638 }' 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.638 12:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.898 [2024-11-19 12:34:57.143372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.898 BaseBdev1 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.898 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:51.899 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:51.899 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.899 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:51.899 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.899 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.899 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.899 12:34:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.899 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.159 [ 00:14:52.159 { 00:14:52.159 "name": "BaseBdev1", 00:14:52.159 "aliases": [ 00:14:52.159 "cc27187a-8ab8-46b7-b4b4-a6d6642a3226" 00:14:52.159 ], 00:14:52.159 "product_name": "Malloc disk", 00:14:52.159 "block_size": 512, 00:14:52.159 "num_blocks": 65536, 00:14:52.159 "uuid": "cc27187a-8ab8-46b7-b4b4-a6d6642a3226", 00:14:52.159 "assigned_rate_limits": { 00:14:52.159 "rw_ios_per_sec": 0, 00:14:52.159 "rw_mbytes_per_sec": 0, 00:14:52.159 "r_mbytes_per_sec": 0, 00:14:52.159 "w_mbytes_per_sec": 0 00:14:52.159 }, 00:14:52.159 "claimed": true, 00:14:52.159 "claim_type": "exclusive_write", 00:14:52.159 "zoned": false, 00:14:52.159 "supported_io_types": { 00:14:52.159 "read": true, 00:14:52.159 "write": true, 00:14:52.159 "unmap": true, 00:14:52.159 "flush": true, 00:14:52.159 "reset": true, 00:14:52.159 "nvme_admin": false, 00:14:52.159 "nvme_io": false, 00:14:52.159 "nvme_io_md": false, 00:14:52.159 "write_zeroes": true, 00:14:52.159 "zcopy": true, 00:14:52.159 "get_zone_info": false, 00:14:52.159 "zone_management": false, 00:14:52.159 "zone_append": false, 00:14:52.159 "compare": false, 00:14:52.159 "compare_and_write": false, 00:14:52.159 "abort": true, 00:14:52.159 "seek_hole": false, 00:14:52.159 "seek_data": false, 00:14:52.159 "copy": true, 00:14:52.159 "nvme_iov_md": false 00:14:52.159 }, 00:14:52.159 "memory_domains": [ 00:14:52.159 { 
00:14:52.159 "dma_device_id": "system", 00:14:52.159 "dma_device_type": 1 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.159 "dma_device_type": 2 00:14:52.159 } 00:14:52.159 ], 00:14:52.159 "driver_specific": {} 00:14:52.159 } 00:14:52.159 ] 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.159 12:34:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.159 "name": "Existed_Raid", 00:14:52.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.159 "strip_size_kb": 64, 00:14:52.159 "state": "configuring", 00:14:52.159 "raid_level": "raid5f", 00:14:52.159 "superblock": false, 00:14:52.159 "num_base_bdevs": 4, 00:14:52.159 "num_base_bdevs_discovered": 3, 00:14:52.159 "num_base_bdevs_operational": 4, 00:14:52.159 "base_bdevs_list": [ 00:14:52.159 { 00:14:52.159 "name": "BaseBdev1", 00:14:52.159 "uuid": "cc27187a-8ab8-46b7-b4b4-a6d6642a3226", 00:14:52.159 "is_configured": true, 00:14:52.159 "data_offset": 0, 00:14:52.159 "data_size": 65536 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "name": null, 00:14:52.159 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:52.159 "is_configured": false, 00:14:52.159 "data_offset": 0, 00:14:52.159 "data_size": 65536 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "name": "BaseBdev3", 00:14:52.159 "uuid": "8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:52.159 "is_configured": true, 00:14:52.159 "data_offset": 0, 00:14:52.159 "data_size": 65536 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "name": "BaseBdev4", 00:14:52.159 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:52.159 "is_configured": true, 00:14:52.159 "data_offset": 0, 00:14:52.159 "data_size": 65536 00:14:52.159 } 00:14:52.159 ] 00:14:52.159 }' 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.159 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.419 12:34:57 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:52.419 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.419 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.420 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.420 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.680 [2024-11-19 12:34:57.690564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.680 "name": "Existed_Raid", 00:14:52.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.680 "strip_size_kb": 64, 00:14:52.680 "state": "configuring", 00:14:52.680 "raid_level": "raid5f", 00:14:52.680 "superblock": false, 00:14:52.680 "num_base_bdevs": 4, 00:14:52.680 "num_base_bdevs_discovered": 2, 00:14:52.680 "num_base_bdevs_operational": 4, 00:14:52.680 "base_bdevs_list": [ 00:14:52.680 { 00:14:52.680 "name": "BaseBdev1", 00:14:52.680 "uuid": "cc27187a-8ab8-46b7-b4b4-a6d6642a3226", 00:14:52.680 "is_configured": true, 00:14:52.680 "data_offset": 0, 00:14:52.680 "data_size": 65536 00:14:52.680 }, 00:14:52.680 { 00:14:52.680 "name": null, 00:14:52.680 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:52.680 "is_configured": false, 00:14:52.680 "data_offset": 0, 00:14:52.680 "data_size": 65536 00:14:52.680 }, 00:14:52.680 { 00:14:52.680 "name": null, 00:14:52.680 "uuid": 
"8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:52.680 "is_configured": false, 00:14:52.680 "data_offset": 0, 00:14:52.680 "data_size": 65536 00:14:52.680 }, 00:14:52.680 { 00:14:52.680 "name": "BaseBdev4", 00:14:52.680 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:52.680 "is_configured": true, 00:14:52.680 "data_offset": 0, 00:14:52.680 "data_size": 65536 00:14:52.680 } 00:14:52.680 ] 00:14:52.680 }' 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.680 12:34:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.940 [2024-11-19 12:34:58.157855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.940 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.201 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.201 "name": "Existed_Raid", 00:14:53.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.201 "strip_size_kb": 64, 00:14:53.201 "state": "configuring", 00:14:53.201 "raid_level": "raid5f", 00:14:53.201 
"superblock": false, 00:14:53.201 "num_base_bdevs": 4, 00:14:53.201 "num_base_bdevs_discovered": 3, 00:14:53.201 "num_base_bdevs_operational": 4, 00:14:53.201 "base_bdevs_list": [ 00:14:53.201 { 00:14:53.201 "name": "BaseBdev1", 00:14:53.201 "uuid": "cc27187a-8ab8-46b7-b4b4-a6d6642a3226", 00:14:53.201 "is_configured": true, 00:14:53.201 "data_offset": 0, 00:14:53.201 "data_size": 65536 00:14:53.201 }, 00:14:53.201 { 00:14:53.201 "name": null, 00:14:53.201 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:53.201 "is_configured": false, 00:14:53.201 "data_offset": 0, 00:14:53.201 "data_size": 65536 00:14:53.201 }, 00:14:53.201 { 00:14:53.201 "name": "BaseBdev3", 00:14:53.201 "uuid": "8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:53.201 "is_configured": true, 00:14:53.201 "data_offset": 0, 00:14:53.201 "data_size": 65536 00:14:53.201 }, 00:14:53.201 { 00:14:53.201 "name": "BaseBdev4", 00:14:53.201 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:53.201 "is_configured": true, 00:14:53.201 "data_offset": 0, 00:14:53.201 "data_size": 65536 00:14:53.201 } 00:14:53.201 ] 00:14:53.201 }' 00:14:53.202 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.202 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.462 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.462 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.462 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:53.462 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.462 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.462 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e 
]] 00:14:53.462 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.463 [2024-11-19 12:34:58.657009] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.463 "name": "Existed_Raid", 00:14:53.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.463 "strip_size_kb": 64, 00:14:53.463 "state": "configuring", 00:14:53.463 "raid_level": "raid5f", 00:14:53.463 "superblock": false, 00:14:53.463 "num_base_bdevs": 4, 00:14:53.463 "num_base_bdevs_discovered": 2, 00:14:53.463 "num_base_bdevs_operational": 4, 00:14:53.463 "base_bdevs_list": [ 00:14:53.463 { 00:14:53.463 "name": null, 00:14:53.463 "uuid": "cc27187a-8ab8-46b7-b4b4-a6d6642a3226", 00:14:53.463 "is_configured": false, 00:14:53.463 "data_offset": 0, 00:14:53.463 "data_size": 65536 00:14:53.463 }, 00:14:53.463 { 00:14:53.463 "name": null, 00:14:53.463 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:53.463 "is_configured": false, 00:14:53.463 "data_offset": 0, 00:14:53.463 "data_size": 65536 00:14:53.463 }, 00:14:53.463 { 00:14:53.463 "name": "BaseBdev3", 00:14:53.463 "uuid": "8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:53.463 "is_configured": true, 00:14:53.463 "data_offset": 0, 00:14:53.463 "data_size": 65536 00:14:53.463 }, 00:14:53.463 { 00:14:53.463 "name": "BaseBdev4", 00:14:53.463 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:53.463 "is_configured": true, 00:14:53.463 "data_offset": 0, 00:14:53.463 "data_size": 65536 00:14:53.463 } 00:14:53.463 ] 00:14:53.463 }' 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.463 12:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.040 12:34:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.040 [2024-11-19 12:34:59.150859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.040 "name": "Existed_Raid", 00:14:54.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.040 "strip_size_kb": 64, 00:14:54.040 "state": "configuring", 00:14:54.040 "raid_level": "raid5f", 00:14:54.040 "superblock": false, 00:14:54.040 "num_base_bdevs": 4, 00:14:54.040 "num_base_bdevs_discovered": 3, 00:14:54.040 "num_base_bdevs_operational": 4, 00:14:54.040 "base_bdevs_list": [ 00:14:54.040 { 00:14:54.040 "name": null, 00:14:54.040 "uuid": "cc27187a-8ab8-46b7-b4b4-a6d6642a3226", 00:14:54.040 "is_configured": false, 00:14:54.040 "data_offset": 0, 00:14:54.040 "data_size": 65536 00:14:54.040 }, 00:14:54.040 { 00:14:54.040 "name": "BaseBdev2", 00:14:54.040 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:54.040 "is_configured": true, 00:14:54.040 "data_offset": 0, 00:14:54.040 "data_size": 65536 00:14:54.040 }, 
00:14:54.040 { 00:14:54.040 "name": "BaseBdev3", 00:14:54.040 "uuid": "8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:54.040 "is_configured": true, 00:14:54.040 "data_offset": 0, 00:14:54.040 "data_size": 65536 00:14:54.040 }, 00:14:54.040 { 00:14:54.040 "name": "BaseBdev4", 00:14:54.040 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:54.040 "is_configured": true, 00:14:54.040 "data_offset": 0, 00:14:54.040 "data_size": 65536 00:14:54.040 } 00:14:54.040 ] 00:14:54.040 }' 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.040 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.645 12:34:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc27187a-8ab8-46b7-b4b4-a6d6642a3226 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.645 [2024-11-19 12:34:59.740832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:54.645 [2024-11-19 12:34:59.740893] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:54.645 [2024-11-19 12:34:59.740901] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:54.645 [2024-11-19 12:34:59.741156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:54.645 [2024-11-19 12:34:59.741575] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:54.645 [2024-11-19 12:34:59.741589] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:54.645 [2024-11-19 12:34:59.741783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.645 NewBaseBdev 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:54.645 
12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.645 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.645 [ 00:14:54.645 { 00:14:54.645 "name": "NewBaseBdev", 00:14:54.645 "aliases": [ 00:14:54.645 "cc27187a-8ab8-46b7-b4b4-a6d6642a3226" 00:14:54.645 ], 00:14:54.645 "product_name": "Malloc disk", 00:14:54.645 "block_size": 512, 00:14:54.645 "num_blocks": 65536, 00:14:54.645 "uuid": "cc27187a-8ab8-46b7-b4b4-a6d6642a3226", 00:14:54.645 "assigned_rate_limits": { 00:14:54.645 "rw_ios_per_sec": 0, 00:14:54.645 "rw_mbytes_per_sec": 0, 00:14:54.645 "r_mbytes_per_sec": 0, 00:14:54.645 "w_mbytes_per_sec": 0 00:14:54.645 }, 00:14:54.645 "claimed": true, 00:14:54.645 "claim_type": "exclusive_write", 00:14:54.645 "zoned": false, 00:14:54.646 "supported_io_types": { 00:14:54.646 "read": true, 00:14:54.646 "write": true, 00:14:54.646 "unmap": true, 00:14:54.646 "flush": true, 00:14:54.646 "reset": true, 00:14:54.646 "nvme_admin": false, 00:14:54.646 "nvme_io": false, 00:14:54.646 "nvme_io_md": false, 00:14:54.646 "write_zeroes": true, 00:14:54.646 "zcopy": true, 00:14:54.646 "get_zone_info": false, 00:14:54.646 "zone_management": false, 00:14:54.646 "zone_append": false, 
00:14:54.646 "compare": false, 00:14:54.646 "compare_and_write": false, 00:14:54.646 "abort": true, 00:14:54.646 "seek_hole": false, 00:14:54.646 "seek_data": false, 00:14:54.646 "copy": true, 00:14:54.646 "nvme_iov_md": false 00:14:54.646 }, 00:14:54.646 "memory_domains": [ 00:14:54.646 { 00:14:54.646 "dma_device_id": "system", 00:14:54.646 "dma_device_type": 1 00:14:54.646 }, 00:14:54.646 { 00:14:54.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.646 "dma_device_type": 2 00:14:54.646 } 00:14:54.646 ], 00:14:54.646 "driver_specific": {} 00:14:54.646 } 00:14:54.646 ] 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.646 12:34:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.646 "name": "Existed_Raid", 00:14:54.646 "uuid": "508555ab-d4de-4e8e-90ba-2fada1f06508", 00:14:54.646 "strip_size_kb": 64, 00:14:54.646 "state": "online", 00:14:54.646 "raid_level": "raid5f", 00:14:54.646 "superblock": false, 00:14:54.646 "num_base_bdevs": 4, 00:14:54.646 "num_base_bdevs_discovered": 4, 00:14:54.646 "num_base_bdevs_operational": 4, 00:14:54.646 "base_bdevs_list": [ 00:14:54.646 { 00:14:54.646 "name": "NewBaseBdev", 00:14:54.646 "uuid": "cc27187a-8ab8-46b7-b4b4-a6d6642a3226", 00:14:54.646 "is_configured": true, 00:14:54.646 "data_offset": 0, 00:14:54.646 "data_size": 65536 00:14:54.646 }, 00:14:54.646 { 00:14:54.646 "name": "BaseBdev2", 00:14:54.646 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:54.646 "is_configured": true, 00:14:54.646 "data_offset": 0, 00:14:54.646 "data_size": 65536 00:14:54.646 }, 00:14:54.646 { 00:14:54.646 "name": "BaseBdev3", 00:14:54.646 "uuid": "8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:54.646 "is_configured": true, 00:14:54.646 "data_offset": 0, 00:14:54.646 "data_size": 65536 00:14:54.646 }, 00:14:54.646 { 00:14:54.646 "name": "BaseBdev4", 00:14:54.646 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:54.646 "is_configured": true, 00:14:54.646 "data_offset": 0, 00:14:54.646 "data_size": 65536 00:14:54.646 } 00:14:54.646 ] 00:14:54.646 }' 
00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.646 12:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:55.215 [2024-11-19 12:35:00.232337] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:55.215 "name": "Existed_Raid", 00:14:55.215 "aliases": [ 00:14:55.215 "508555ab-d4de-4e8e-90ba-2fada1f06508" 00:14:55.215 ], 00:14:55.215 "product_name": "Raid Volume", 00:14:55.215 "block_size": 512, 00:14:55.215 "num_blocks": 196608, 00:14:55.215 "uuid": "508555ab-d4de-4e8e-90ba-2fada1f06508", 00:14:55.215 
"assigned_rate_limits": { 00:14:55.215 "rw_ios_per_sec": 0, 00:14:55.215 "rw_mbytes_per_sec": 0, 00:14:55.215 "r_mbytes_per_sec": 0, 00:14:55.215 "w_mbytes_per_sec": 0 00:14:55.215 }, 00:14:55.215 "claimed": false, 00:14:55.215 "zoned": false, 00:14:55.215 "supported_io_types": { 00:14:55.215 "read": true, 00:14:55.215 "write": true, 00:14:55.215 "unmap": false, 00:14:55.215 "flush": false, 00:14:55.215 "reset": true, 00:14:55.215 "nvme_admin": false, 00:14:55.215 "nvme_io": false, 00:14:55.215 "nvme_io_md": false, 00:14:55.215 "write_zeroes": true, 00:14:55.215 "zcopy": false, 00:14:55.215 "get_zone_info": false, 00:14:55.215 "zone_management": false, 00:14:55.215 "zone_append": false, 00:14:55.215 "compare": false, 00:14:55.215 "compare_and_write": false, 00:14:55.215 "abort": false, 00:14:55.215 "seek_hole": false, 00:14:55.215 "seek_data": false, 00:14:55.215 "copy": false, 00:14:55.215 "nvme_iov_md": false 00:14:55.215 }, 00:14:55.215 "driver_specific": { 00:14:55.215 "raid": { 00:14:55.215 "uuid": "508555ab-d4de-4e8e-90ba-2fada1f06508", 00:14:55.215 "strip_size_kb": 64, 00:14:55.215 "state": "online", 00:14:55.215 "raid_level": "raid5f", 00:14:55.215 "superblock": false, 00:14:55.215 "num_base_bdevs": 4, 00:14:55.215 "num_base_bdevs_discovered": 4, 00:14:55.215 "num_base_bdevs_operational": 4, 00:14:55.215 "base_bdevs_list": [ 00:14:55.215 { 00:14:55.215 "name": "NewBaseBdev", 00:14:55.215 "uuid": "cc27187a-8ab8-46b7-b4b4-a6d6642a3226", 00:14:55.215 "is_configured": true, 00:14:55.215 "data_offset": 0, 00:14:55.215 "data_size": 65536 00:14:55.215 }, 00:14:55.215 { 00:14:55.215 "name": "BaseBdev2", 00:14:55.215 "uuid": "340a5bb6-f3e4-4fe7-a3e0-887d1eef00df", 00:14:55.215 "is_configured": true, 00:14:55.215 "data_offset": 0, 00:14:55.215 "data_size": 65536 00:14:55.215 }, 00:14:55.215 { 00:14:55.215 "name": "BaseBdev3", 00:14:55.215 "uuid": "8f42b912-f996-44d8-9914-47cded91ddf9", 00:14:55.215 "is_configured": true, 00:14:55.215 "data_offset": 0, 00:14:55.215 
"data_size": 65536 00:14:55.215 }, 00:14:55.215 { 00:14:55.215 "name": "BaseBdev4", 00:14:55.215 "uuid": "9f3d8587-9f11-49d7-95f1-e97f558722ae", 00:14:55.215 "is_configured": true, 00:14:55.215 "data_offset": 0, 00:14:55.215 "data_size": 65536 00:14:55.215 } 00:14:55.215 ] 00:14:55.215 } 00:14:55.215 } 00:14:55.215 }' 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:55.215 BaseBdev2 00:14:55.215 BaseBdev3 00:14:55.215 BaseBdev4' 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.215 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.475 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.476 12:35:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.476 [2024-11-19 12:35:00.591501] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.476 [2024-11-19 12:35:00.591546] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.476 [2024-11-19 12:35:00.591644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.476 [2024-11-19 12:35:00.591929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.476 [2024-11-19 12:35:00.591942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.476 12:35:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93470 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93470 ']' 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93470 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93470 00:14:55.476 killing process with pid 93470 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93470' 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93470 00:14:55.476 [2024-11-19 12:35:00.635484] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.476 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93470 00:14:55.476 [2024-11-19 12:35:00.676258] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.735 12:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:55.735 00:14:55.735 real 0m9.807s 00:14:55.735 user 0m16.580s 00:14:55.735 sys 0m2.303s 00:14:55.735 12:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.735 ************************************ 00:14:55.735 END TEST raid5f_state_function_test 00:14:55.735 ************************************ 00:14:55.735 12:35:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.735 12:35:00 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:55.735 12:35:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:55.735 12:35:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.735 12:35:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.994 ************************************ 00:14:55.994 START TEST raid5f_state_function_test_sb 00:14:55.994 ************************************ 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.994 12:35:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:55.994 12:35:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94119 00:14:55.994 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:55.995 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94119' 00:14:55.995 Process raid pid: 94119 00:14:55.995 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94119 00:14:55.995 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94119 ']' 00:14:55.995 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.995 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.995 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.995 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.995 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.995 [2024-11-19 12:35:01.105041] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:55.995 [2024-11-19 12:35:01.105265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.253 [2024-11-19 12:35:01.267265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.253 [2024-11-19 12:35:01.319813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.253 [2024-11-19 12:35:01.361586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.253 [2024-11-19 12:35:01.361699] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.823 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.823 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:56.823 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:56.823 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.823 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.823 [2024-11-19 12:35:01.995509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.823 [2024-11-19 12:35:01.995579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.823 [2024-11-19 12:35:01.995592] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.823 [2024-11-19 12:35:01.995602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.823 [2024-11-19 12:35:01.995608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:56.823 [2024-11-19 12:35:01.995619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.823 [2024-11-19 12:35:01.995625] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:56.823 [2024-11-19 12:35:01.995635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:56.823 12:35:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.823 12:35:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.823 "name": "Existed_Raid", 00:14:56.823 "uuid": "dc0322bc-3157-4969-83f7-77dce42788a6", 00:14:56.823 "strip_size_kb": 64, 00:14:56.823 "state": "configuring", 00:14:56.823 "raid_level": "raid5f", 00:14:56.823 "superblock": true, 00:14:56.823 "num_base_bdevs": 4, 00:14:56.823 "num_base_bdevs_discovered": 0, 00:14:56.823 "num_base_bdevs_operational": 4, 00:14:56.823 "base_bdevs_list": [ 00:14:56.823 { 00:14:56.823 "name": "BaseBdev1", 00:14:56.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.823 "is_configured": false, 00:14:56.823 "data_offset": 0, 00:14:56.823 "data_size": 0 00:14:56.823 }, 00:14:56.823 { 00:14:56.823 "name": "BaseBdev2", 00:14:56.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.823 "is_configured": false, 00:14:56.823 "data_offset": 0, 00:14:56.823 "data_size": 0 00:14:56.823 }, 00:14:56.823 { 00:14:56.823 "name": "BaseBdev3", 00:14:56.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.823 "is_configured": false, 00:14:56.823 "data_offset": 0, 00:14:56.823 "data_size": 0 00:14:56.823 }, 00:14:56.823 { 00:14:56.823 "name": "BaseBdev4", 00:14:56.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.823 "is_configured": false, 00:14:56.823 "data_offset": 0, 00:14:56.823 "data_size": 0 00:14:56.823 } 00:14:56.823 ] 00:14:56.823 }' 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.823 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.393 [2024-11-19 12:35:02.418687] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.393 [2024-11-19 12:35:02.418848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.393 [2024-11-19 12:35:02.426726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.393 [2024-11-19 12:35:02.426834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.393 [2024-11-19 12:35:02.426864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.393 [2024-11-19 12:35:02.426887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.393 [2024-11-19 12:35:02.426905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:57.393 [2024-11-19 12:35:02.426947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:57.393 [2024-11-19 12:35:02.427012] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:57.393 [2024-11-19 12:35:02.427045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.393 [2024-11-19 12:35:02.443565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.393 BaseBdev1 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.393 [ 00:14:57.393 { 00:14:57.393 "name": "BaseBdev1", 00:14:57.393 "aliases": [ 00:14:57.393 "0c4a666c-ee1b-4423-b393-d2d486ed366e" 00:14:57.393 ], 00:14:57.393 "product_name": "Malloc disk", 00:14:57.393 "block_size": 512, 00:14:57.393 "num_blocks": 65536, 00:14:57.393 "uuid": "0c4a666c-ee1b-4423-b393-d2d486ed366e", 00:14:57.393 "assigned_rate_limits": { 00:14:57.393 "rw_ios_per_sec": 0, 00:14:57.393 "rw_mbytes_per_sec": 0, 00:14:57.393 "r_mbytes_per_sec": 0, 00:14:57.393 "w_mbytes_per_sec": 0 00:14:57.393 }, 00:14:57.393 "claimed": true, 00:14:57.393 "claim_type": "exclusive_write", 00:14:57.393 "zoned": false, 00:14:57.393 "supported_io_types": { 00:14:57.393 "read": true, 00:14:57.393 "write": true, 00:14:57.393 "unmap": true, 00:14:57.393 "flush": true, 00:14:57.393 "reset": true, 00:14:57.393 "nvme_admin": false, 00:14:57.393 "nvme_io": false, 00:14:57.393 "nvme_io_md": false, 00:14:57.393 "write_zeroes": true, 00:14:57.393 "zcopy": true, 00:14:57.393 "get_zone_info": false, 00:14:57.393 "zone_management": false, 00:14:57.393 "zone_append": false, 00:14:57.393 "compare": false, 00:14:57.393 "compare_and_write": false, 00:14:57.393 "abort": true, 00:14:57.393 "seek_hole": false, 00:14:57.393 "seek_data": false, 00:14:57.393 "copy": true, 00:14:57.393 "nvme_iov_md": false 00:14:57.393 }, 00:14:57.393 "memory_domains": [ 00:14:57.393 { 00:14:57.393 "dma_device_id": "system", 00:14:57.393 "dma_device_type": 1 00:14:57.393 }, 00:14:57.393 { 00:14:57.393 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:57.393 "dma_device_type": 2 00:14:57.393 } 00:14:57.393 ], 00:14:57.393 "driver_specific": {} 00:14:57.393 } 00:14:57.393 ] 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.393 12:35:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.393 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.393 "name": "Existed_Raid", 00:14:57.393 "uuid": "e5cf2a50-913a-493b-969d-5aad0999668e", 00:14:57.393 "strip_size_kb": 64, 00:14:57.394 "state": "configuring", 00:14:57.394 "raid_level": "raid5f", 00:14:57.394 "superblock": true, 00:14:57.394 "num_base_bdevs": 4, 00:14:57.394 "num_base_bdevs_discovered": 1, 00:14:57.394 "num_base_bdevs_operational": 4, 00:14:57.394 "base_bdevs_list": [ 00:14:57.394 { 00:14:57.394 "name": "BaseBdev1", 00:14:57.394 "uuid": "0c4a666c-ee1b-4423-b393-d2d486ed366e", 00:14:57.394 "is_configured": true, 00:14:57.394 "data_offset": 2048, 00:14:57.394 "data_size": 63488 00:14:57.394 }, 00:14:57.394 { 00:14:57.394 "name": "BaseBdev2", 00:14:57.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.394 "is_configured": false, 00:14:57.394 "data_offset": 0, 00:14:57.394 "data_size": 0 00:14:57.394 }, 00:14:57.394 { 00:14:57.394 "name": "BaseBdev3", 00:14:57.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.394 "is_configured": false, 00:14:57.394 "data_offset": 0, 00:14:57.394 "data_size": 0 00:14:57.394 }, 00:14:57.394 { 00:14:57.394 "name": "BaseBdev4", 00:14:57.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.394 "is_configured": false, 00:14:57.394 "data_offset": 0, 00:14:57.394 "data_size": 0 00:14:57.394 } 00:14:57.394 ] 00:14:57.394 }' 00:14:57.394 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.394 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.963 12:35:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.963 [2024-11-19 12:35:02.926889] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.963 [2024-11-19 12:35:02.927046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.963 [2024-11-19 12:35:02.938920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.963 [2024-11-19 12:35:02.940791] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.963 [2024-11-19 12:35:02.940836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.963 [2024-11-19 12:35:02.940845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:57.963 [2024-11-19 12:35:02.940854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:57.963 [2024-11-19 12:35:02.940860] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:57.963 [2024-11-19 12:35:02.940868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.963 12:35:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.963 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.963 "name": "Existed_Raid", 00:14:57.963 "uuid": "0b60620e-4f48-4d61-8a02-b160d45075ca", 00:14:57.964 "strip_size_kb": 64, 00:14:57.964 "state": "configuring", 00:14:57.964 "raid_level": "raid5f", 00:14:57.964 "superblock": true, 00:14:57.964 "num_base_bdevs": 4, 00:14:57.964 "num_base_bdevs_discovered": 1, 00:14:57.964 "num_base_bdevs_operational": 4, 00:14:57.964 "base_bdevs_list": [ 00:14:57.964 { 00:14:57.964 "name": "BaseBdev1", 00:14:57.964 "uuid": "0c4a666c-ee1b-4423-b393-d2d486ed366e", 00:14:57.964 "is_configured": true, 00:14:57.964 "data_offset": 2048, 00:14:57.964 "data_size": 63488 00:14:57.964 }, 00:14:57.964 { 00:14:57.964 "name": "BaseBdev2", 00:14:57.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.964 "is_configured": false, 00:14:57.964 "data_offset": 0, 00:14:57.964 "data_size": 0 00:14:57.964 }, 00:14:57.964 { 00:14:57.964 "name": "BaseBdev3", 00:14:57.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.964 "is_configured": false, 00:14:57.964 "data_offset": 0, 00:14:57.964 "data_size": 0 00:14:57.964 }, 00:14:57.964 { 00:14:57.964 "name": "BaseBdev4", 00:14:57.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.964 "is_configured": false, 00:14:57.964 "data_offset": 0, 00:14:57.964 "data_size": 0 00:14:57.964 } 00:14:57.964 ] 00:14:57.964 }' 00:14:57.964 12:35:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.964 12:35:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.223 [2024-11-19 12:35:03.365217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.223 BaseBdev2 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.223 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.223 [ 00:14:58.223 { 00:14:58.223 "name": "BaseBdev2", 00:14:58.223 "aliases": [ 00:14:58.223 
"8342b84e-2ddc-4798-83c2-f41dfb67f0c5" 00:14:58.223 ], 00:14:58.223 "product_name": "Malloc disk", 00:14:58.223 "block_size": 512, 00:14:58.223 "num_blocks": 65536, 00:14:58.223 "uuid": "8342b84e-2ddc-4798-83c2-f41dfb67f0c5", 00:14:58.223 "assigned_rate_limits": { 00:14:58.223 "rw_ios_per_sec": 0, 00:14:58.223 "rw_mbytes_per_sec": 0, 00:14:58.224 "r_mbytes_per_sec": 0, 00:14:58.224 "w_mbytes_per_sec": 0 00:14:58.224 }, 00:14:58.224 "claimed": true, 00:14:58.224 "claim_type": "exclusive_write", 00:14:58.224 "zoned": false, 00:14:58.224 "supported_io_types": { 00:14:58.224 "read": true, 00:14:58.224 "write": true, 00:14:58.224 "unmap": true, 00:14:58.224 "flush": true, 00:14:58.224 "reset": true, 00:14:58.224 "nvme_admin": false, 00:14:58.224 "nvme_io": false, 00:14:58.224 "nvme_io_md": false, 00:14:58.224 "write_zeroes": true, 00:14:58.224 "zcopy": true, 00:14:58.224 "get_zone_info": false, 00:14:58.224 "zone_management": false, 00:14:58.224 "zone_append": false, 00:14:58.224 "compare": false, 00:14:58.224 "compare_and_write": false, 00:14:58.224 "abort": true, 00:14:58.224 "seek_hole": false, 00:14:58.224 "seek_data": false, 00:14:58.224 "copy": true, 00:14:58.224 "nvme_iov_md": false 00:14:58.224 }, 00:14:58.224 "memory_domains": [ 00:14:58.224 { 00:14:58.224 "dma_device_id": "system", 00:14:58.224 "dma_device_type": 1 00:14:58.224 }, 00:14:58.224 { 00:14:58.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.224 "dma_device_type": 2 00:14:58.224 } 00:14:58.224 ], 00:14:58.224 "driver_specific": {} 00:14:58.224 } 00:14:58.224 ] 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.224 "name": "Existed_Raid", 00:14:58.224 "uuid": 
"0b60620e-4f48-4d61-8a02-b160d45075ca", 00:14:58.224 "strip_size_kb": 64, 00:14:58.224 "state": "configuring", 00:14:58.224 "raid_level": "raid5f", 00:14:58.224 "superblock": true, 00:14:58.224 "num_base_bdevs": 4, 00:14:58.224 "num_base_bdevs_discovered": 2, 00:14:58.224 "num_base_bdevs_operational": 4, 00:14:58.224 "base_bdevs_list": [ 00:14:58.224 { 00:14:58.224 "name": "BaseBdev1", 00:14:58.224 "uuid": "0c4a666c-ee1b-4423-b393-d2d486ed366e", 00:14:58.224 "is_configured": true, 00:14:58.224 "data_offset": 2048, 00:14:58.224 "data_size": 63488 00:14:58.224 }, 00:14:58.224 { 00:14:58.224 "name": "BaseBdev2", 00:14:58.224 "uuid": "8342b84e-2ddc-4798-83c2-f41dfb67f0c5", 00:14:58.224 "is_configured": true, 00:14:58.224 "data_offset": 2048, 00:14:58.224 "data_size": 63488 00:14:58.224 }, 00:14:58.224 { 00:14:58.224 "name": "BaseBdev3", 00:14:58.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.224 "is_configured": false, 00:14:58.224 "data_offset": 0, 00:14:58.224 "data_size": 0 00:14:58.224 }, 00:14:58.224 { 00:14:58.224 "name": "BaseBdev4", 00:14:58.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.224 "is_configured": false, 00:14:58.224 "data_offset": 0, 00:14:58.224 "data_size": 0 00:14:58.224 } 00:14:58.224 ] 00:14:58.224 }' 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.224 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.793 [2024-11-19 12:35:03.851443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.793 BaseBdev3 
00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.793 [ 00:14:58.793 { 00:14:58.793 "name": "BaseBdev3", 00:14:58.793 "aliases": [ 00:14:58.793 "b999fefc-5cd6-4ccc-a1bb-a48bcf7b79ad" 00:14:58.793 ], 00:14:58.793 "product_name": "Malloc disk", 00:14:58.793 "block_size": 512, 00:14:58.793 "num_blocks": 65536, 00:14:58.793 "uuid": "b999fefc-5cd6-4ccc-a1bb-a48bcf7b79ad", 00:14:58.793 
"assigned_rate_limits": { 00:14:58.793 "rw_ios_per_sec": 0, 00:14:58.793 "rw_mbytes_per_sec": 0, 00:14:58.793 "r_mbytes_per_sec": 0, 00:14:58.793 "w_mbytes_per_sec": 0 00:14:58.793 }, 00:14:58.793 "claimed": true, 00:14:58.793 "claim_type": "exclusive_write", 00:14:58.793 "zoned": false, 00:14:58.793 "supported_io_types": { 00:14:58.793 "read": true, 00:14:58.793 "write": true, 00:14:58.793 "unmap": true, 00:14:58.793 "flush": true, 00:14:58.793 "reset": true, 00:14:58.793 "nvme_admin": false, 00:14:58.793 "nvme_io": false, 00:14:58.793 "nvme_io_md": false, 00:14:58.793 "write_zeroes": true, 00:14:58.793 "zcopy": true, 00:14:58.793 "get_zone_info": false, 00:14:58.793 "zone_management": false, 00:14:58.793 "zone_append": false, 00:14:58.793 "compare": false, 00:14:58.793 "compare_and_write": false, 00:14:58.793 "abort": true, 00:14:58.793 "seek_hole": false, 00:14:58.793 "seek_data": false, 00:14:58.793 "copy": true, 00:14:58.793 "nvme_iov_md": false 00:14:58.793 }, 00:14:58.793 "memory_domains": [ 00:14:58.793 { 00:14:58.793 "dma_device_id": "system", 00:14:58.793 "dma_device_type": 1 00:14:58.793 }, 00:14:58.793 { 00:14:58.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.793 "dma_device_type": 2 00:14:58.793 } 00:14:58.793 ], 00:14:58.793 "driver_specific": {} 00:14:58.793 } 00:14:58.793 ] 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:58.793 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.794 "name": "Existed_Raid", 00:14:58.794 "uuid": "0b60620e-4f48-4d61-8a02-b160d45075ca", 00:14:58.794 "strip_size_kb": 64, 00:14:58.794 "state": "configuring", 00:14:58.794 "raid_level": "raid5f", 00:14:58.794 "superblock": true, 00:14:58.794 "num_base_bdevs": 4, 00:14:58.794 "num_base_bdevs_discovered": 3, 
00:14:58.794 "num_base_bdevs_operational": 4, 00:14:58.794 "base_bdevs_list": [ 00:14:58.794 { 00:14:58.794 "name": "BaseBdev1", 00:14:58.794 "uuid": "0c4a666c-ee1b-4423-b393-d2d486ed366e", 00:14:58.794 "is_configured": true, 00:14:58.794 "data_offset": 2048, 00:14:58.794 "data_size": 63488 00:14:58.794 }, 00:14:58.794 { 00:14:58.794 "name": "BaseBdev2", 00:14:58.794 "uuid": "8342b84e-2ddc-4798-83c2-f41dfb67f0c5", 00:14:58.794 "is_configured": true, 00:14:58.794 "data_offset": 2048, 00:14:58.794 "data_size": 63488 00:14:58.794 }, 00:14:58.794 { 00:14:58.794 "name": "BaseBdev3", 00:14:58.794 "uuid": "b999fefc-5cd6-4ccc-a1bb-a48bcf7b79ad", 00:14:58.794 "is_configured": true, 00:14:58.794 "data_offset": 2048, 00:14:58.794 "data_size": 63488 00:14:58.794 }, 00:14:58.794 { 00:14:58.794 "name": "BaseBdev4", 00:14:58.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.794 "is_configured": false, 00:14:58.794 "data_offset": 0, 00:14:58.794 "data_size": 0 00:14:58.794 } 00:14:58.794 ] 00:14:58.794 }' 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.794 12:35:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.052 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:59.052 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.052 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.312 [2024-11-19 12:35:04.317815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.312 BaseBdev4 00:14:59.312 [2024-11-19 12:35:04.318143] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:59.312 [2024-11-19 12:35:04.318162] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
00:14:59.312 [2024-11-19 12:35:04.318430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:59.312 [2024-11-19 12:35:04.318913] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:59.312 [2024-11-19 12:35:04.318928] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:59.312 [2024-11-19 12:35:04.319045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:59.312 12:35:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.312 [ 00:14:59.312 { 00:14:59.312 "name": "BaseBdev4", 00:14:59.312 "aliases": [ 00:14:59.312 "27f61afc-441a-4873-9943-bd9521760722" 00:14:59.312 ], 00:14:59.312 "product_name": "Malloc disk", 00:14:59.312 "block_size": 512, 00:14:59.312 "num_blocks": 65536, 00:14:59.312 "uuid": "27f61afc-441a-4873-9943-bd9521760722", 00:14:59.312 "assigned_rate_limits": { 00:14:59.312 "rw_ios_per_sec": 0, 00:14:59.312 "rw_mbytes_per_sec": 0, 00:14:59.312 "r_mbytes_per_sec": 0, 00:14:59.312 "w_mbytes_per_sec": 0 00:14:59.312 }, 00:14:59.312 "claimed": true, 00:14:59.312 "claim_type": "exclusive_write", 00:14:59.312 "zoned": false, 00:14:59.312 "supported_io_types": { 00:14:59.312 "read": true, 00:14:59.312 "write": true, 00:14:59.312 "unmap": true, 00:14:59.312 "flush": true, 00:14:59.312 "reset": true, 00:14:59.312 "nvme_admin": false, 00:14:59.312 "nvme_io": false, 00:14:59.312 "nvme_io_md": false, 00:14:59.312 "write_zeroes": true, 00:14:59.312 "zcopy": true, 00:14:59.312 "get_zone_info": false, 00:14:59.312 "zone_management": false, 00:14:59.312 "zone_append": false, 00:14:59.312 "compare": false, 00:14:59.312 "compare_and_write": false, 00:14:59.312 "abort": true, 00:14:59.312 "seek_hole": false, 00:14:59.312 "seek_data": false, 00:14:59.312 "copy": true, 00:14:59.312 "nvme_iov_md": false 00:14:59.312 }, 00:14:59.312 "memory_domains": [ 00:14:59.312 { 00:14:59.312 "dma_device_id": "system", 00:14:59.312 "dma_device_type": 1 00:14:59.312 }, 00:14:59.312 { 00:14:59.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.312 "dma_device_type": 2 00:14:59.312 } 00:14:59.312 ], 00:14:59.312 "driver_specific": {} 00:14:59.312 } 00:14:59.312 ] 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.312 12:35:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.312 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.312 "name": "Existed_Raid", 00:14:59.312 "uuid": "0b60620e-4f48-4d61-8a02-b160d45075ca", 00:14:59.312 "strip_size_kb": 64, 00:14:59.312 "state": "online", 00:14:59.312 "raid_level": "raid5f", 00:14:59.312 "superblock": true, 00:14:59.312 "num_base_bdevs": 4, 00:14:59.312 "num_base_bdevs_discovered": 4, 00:14:59.312 "num_base_bdevs_operational": 4, 00:14:59.312 "base_bdevs_list": [ 00:14:59.312 { 00:14:59.312 "name": "BaseBdev1", 00:14:59.312 "uuid": "0c4a666c-ee1b-4423-b393-d2d486ed366e", 00:14:59.312 "is_configured": true, 00:14:59.312 "data_offset": 2048, 00:14:59.312 "data_size": 63488 00:14:59.312 }, 00:14:59.312 { 00:14:59.312 "name": "BaseBdev2", 00:14:59.312 "uuid": "8342b84e-2ddc-4798-83c2-f41dfb67f0c5", 00:14:59.312 "is_configured": true, 00:14:59.312 "data_offset": 2048, 00:14:59.313 "data_size": 63488 00:14:59.313 }, 00:14:59.313 { 00:14:59.313 "name": "BaseBdev3", 00:14:59.313 "uuid": "b999fefc-5cd6-4ccc-a1bb-a48bcf7b79ad", 00:14:59.313 "is_configured": true, 00:14:59.313 "data_offset": 2048, 00:14:59.313 "data_size": 63488 00:14:59.313 }, 00:14:59.313 { 00:14:59.313 "name": "BaseBdev4", 00:14:59.313 "uuid": "27f61afc-441a-4873-9943-bd9521760722", 00:14:59.313 "is_configured": true, 00:14:59.313 "data_offset": 2048, 00:14:59.313 "data_size": 63488 00:14:59.313 } 00:14:59.313 ] 00:14:59.313 }' 00:14:59.313 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.313 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.572 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.572 [2024-11-19 12:35:04.829244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.831 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.831 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.831 "name": "Existed_Raid", 00:14:59.831 "aliases": [ 00:14:59.831 "0b60620e-4f48-4d61-8a02-b160d45075ca" 00:14:59.831 ], 00:14:59.831 "product_name": "Raid Volume", 00:14:59.831 "block_size": 512, 00:14:59.831 "num_blocks": 190464, 00:14:59.831 "uuid": "0b60620e-4f48-4d61-8a02-b160d45075ca", 00:14:59.831 "assigned_rate_limits": { 00:14:59.831 "rw_ios_per_sec": 0, 00:14:59.831 "rw_mbytes_per_sec": 0, 00:14:59.831 "r_mbytes_per_sec": 0, 00:14:59.831 "w_mbytes_per_sec": 0 00:14:59.831 }, 00:14:59.831 "claimed": false, 00:14:59.831 "zoned": false, 00:14:59.831 "supported_io_types": { 00:14:59.831 "read": true, 00:14:59.831 "write": true, 00:14:59.831 "unmap": false, 00:14:59.831 "flush": false, 
00:14:59.831 "reset": true, 00:14:59.831 "nvme_admin": false, 00:14:59.831 "nvme_io": false, 00:14:59.831 "nvme_io_md": false, 00:14:59.831 "write_zeroes": true, 00:14:59.831 "zcopy": false, 00:14:59.831 "get_zone_info": false, 00:14:59.831 "zone_management": false, 00:14:59.831 "zone_append": false, 00:14:59.831 "compare": false, 00:14:59.831 "compare_and_write": false, 00:14:59.831 "abort": false, 00:14:59.831 "seek_hole": false, 00:14:59.831 "seek_data": false, 00:14:59.831 "copy": false, 00:14:59.831 "nvme_iov_md": false 00:14:59.831 }, 00:14:59.832 "driver_specific": { 00:14:59.832 "raid": { 00:14:59.832 "uuid": "0b60620e-4f48-4d61-8a02-b160d45075ca", 00:14:59.832 "strip_size_kb": 64, 00:14:59.832 "state": "online", 00:14:59.832 "raid_level": "raid5f", 00:14:59.832 "superblock": true, 00:14:59.832 "num_base_bdevs": 4, 00:14:59.832 "num_base_bdevs_discovered": 4, 00:14:59.832 "num_base_bdevs_operational": 4, 00:14:59.832 "base_bdevs_list": [ 00:14:59.832 { 00:14:59.832 "name": "BaseBdev1", 00:14:59.832 "uuid": "0c4a666c-ee1b-4423-b393-d2d486ed366e", 00:14:59.832 "is_configured": true, 00:14:59.832 "data_offset": 2048, 00:14:59.832 "data_size": 63488 00:14:59.832 }, 00:14:59.832 { 00:14:59.832 "name": "BaseBdev2", 00:14:59.832 "uuid": "8342b84e-2ddc-4798-83c2-f41dfb67f0c5", 00:14:59.832 "is_configured": true, 00:14:59.832 "data_offset": 2048, 00:14:59.832 "data_size": 63488 00:14:59.832 }, 00:14:59.832 { 00:14:59.832 "name": "BaseBdev3", 00:14:59.832 "uuid": "b999fefc-5cd6-4ccc-a1bb-a48bcf7b79ad", 00:14:59.832 "is_configured": true, 00:14:59.832 "data_offset": 2048, 00:14:59.832 "data_size": 63488 00:14:59.832 }, 00:14:59.832 { 00:14:59.832 "name": "BaseBdev4", 00:14:59.832 "uuid": "27f61afc-441a-4873-9943-bd9521760722", 00:14:59.832 "is_configured": true, 00:14:59.832 "data_offset": 2048, 00:14:59.832 "data_size": 63488 00:14:59.832 } 00:14:59.832 ] 00:14:59.832 } 00:14:59.832 } 00:14:59.832 }' 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:59.832 BaseBdev2 00:14:59.832 BaseBdev3 00:14:59.832 BaseBdev4' 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.832 12:35:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.832 12:35:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.832 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:00.091 12:35:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.091 [2024-11-19 12:35:05.140613] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:00.091 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:00.092 "name": "Existed_Raid",
00:15:00.092 "uuid": "0b60620e-4f48-4d61-8a02-b160d45075ca",
00:15:00.092 "strip_size_kb": 64,
00:15:00.092 "state": "online",
00:15:00.092 "raid_level": "raid5f",
00:15:00.092 "superblock": true,
00:15:00.092 "num_base_bdevs": 4,
00:15:00.092 "num_base_bdevs_discovered": 3,
00:15:00.092 "num_base_bdevs_operational": 3,
00:15:00.092 "base_bdevs_list": [
00:15:00.092 {
00:15:00.092 "name": null,
00:15:00.092 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:00.092 "is_configured": false,
00:15:00.092 "data_offset": 0,
00:15:00.092 "data_size": 63488
00:15:00.092 },
00:15:00.092 {
00:15:00.092 "name": "BaseBdev2",
00:15:00.092 "uuid": "8342b84e-2ddc-4798-83c2-f41dfb67f0c5",
00:15:00.092 "is_configured": true,
00:15:00.092 "data_offset": 2048,
00:15:00.092 "data_size": 63488
00:15:00.092 },
00:15:00.092 {
00:15:00.092 "name": "BaseBdev3",
00:15:00.092 "uuid": "b999fefc-5cd6-4ccc-a1bb-a48bcf7b79ad",
00:15:00.092 "is_configured": true,
00:15:00.092 "data_offset": 2048,
00:15:00.092 "data_size": 63488
00:15:00.092 },
00:15:00.092 {
00:15:00.092 "name": "BaseBdev4",
00:15:00.092 "uuid": "27f61afc-441a-4873-9943-bd9521760722",
00:15:00.092 "is_configured": true,
00:15:00.092 "data_offset": 2048,
00:15:00.092 "data_size": 63488
00:15:00.092 }
00:15:00.092 ]
00:15:00.092 }'
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:00.092 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.351 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:15:00.351 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:00.351 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.351 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.351 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.351 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:00.351 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- #
raid_bdev=Existed_Raid
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.611 [2024-11-19 12:35:05.639184] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:00.611 [2024-11-19 12:35:05.639441] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:00.611 [2024-11-19 12:35:05.650447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.611 [2024-11-19 12:35:05.710410] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.611 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.612 [2024-11-19 12:35:05.781358] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:15:00.612 [2024-11-19 12:35:05.781422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:00.612 12:35:05
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.612 BaseBdev2
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.612 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.872 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.872 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:00.872 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.872 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.872 [
00:15:00.872 {
00:15:00.872 "name": "BaseBdev2",
00:15:00.872 "aliases": [
00:15:00.872 "6b2bcea8-b732-413a-954d-6b79f926c192"
00:15:00.872 ],
00:15:00.872 "product_name": "Malloc disk",
00:15:00.872 "block_size": 512,
00:15:00.872 "num_blocks": 65536,
00:15:00.872 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192",
00:15:00.872 "assigned_rate_limits": {
00:15:00.872 "rw_ios_per_sec": 0,
00:15:00.872 "rw_mbytes_per_sec": 0,
00:15:00.872 "r_mbytes_per_sec": 0,
00:15:00.872 "w_mbytes_per_sec": 0
00:15:00.872 },
00:15:00.872 "claimed": false,
00:15:00.872 "zoned": false,
00:15:00.873 "supported_io_types": {
00:15:00.873 "read": true,
00:15:00.873 "write": true,
00:15:00.873 "unmap": true,
00:15:00.873 "flush": true,
00:15:00.873 "reset": true,
00:15:00.873 "nvme_admin": false,
00:15:00.873 "nvme_io": false,
00:15:00.873 "nvme_io_md": false,
00:15:00.873 "write_zeroes": true,
00:15:00.873 "zcopy": true,
00:15:00.873 "get_zone_info": false,
00:15:00.873 "zone_management": false,
00:15:00.873 "zone_append": false,
00:15:00.873 "compare": false,
00:15:00.873 "compare_and_write": false,
00:15:00.873 "abort": true,
00:15:00.873 "seek_hole": false,
00:15:00.873 "seek_data": false,
00:15:00.873 "copy": true,
00:15:00.873 "nvme_iov_md": false
00:15:00.873 },
00:15:00.873 "memory_domains": [
00:15:00.873 {
00:15:00.873 "dma_device_id": "system",
00:15:00.873 "dma_device_type": 1
00:15:00.873 },
00:15:00.873 {
00:15:00.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:00.873 "dma_device_type": 2
00:15:00.873 }
00:15:00.873 ],
00:15:00.873 "driver_specific": {}
00:15:00.873 }
00:15:00.873 ]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.873 BaseBdev3
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.873 [
00:15:00.873 {
00:15:00.873 "name": "BaseBdev3",
00:15:00.873 "aliases": [
"6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92"
00:15:00.873 ],
00:15:00.873 "product_name": "Malloc disk",
00:15:00.873 "block_size": 512,
00:15:00.873 "num_blocks": 65536,
00:15:00.873 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92",
00:15:00.873 "assigned_rate_limits": {
00:15:00.873 "rw_ios_per_sec": 0,
00:15:00.873 "rw_mbytes_per_sec": 0,
00:15:00.873 "r_mbytes_per_sec": 0,
00:15:00.873 "w_mbytes_per_sec": 0
00:15:00.873 },
00:15:00.873 "claimed": false,
00:15:00.873 "zoned": false,
00:15:00.873 "supported_io_types": {
00:15:00.873 "read": true,
00:15:00.873 "write": true,
00:15:00.873 "unmap": true,
00:15:00.873 "flush": true,
00:15:00.873 "reset": true,
00:15:00.873 "nvme_admin": false,
00:15:00.873 "nvme_io": false,
00:15:00.873 "nvme_io_md": false,
00:15:00.873 "write_zeroes": true,
00:15:00.873 "zcopy": true,
00:15:00.873 "get_zone_info": false,
00:15:00.873 "zone_management": false,
00:15:00.873 "zone_append": false,
00:15:00.873 "compare": false,
00:15:00.873 "compare_and_write": false,
00:15:00.873 "abort": true,
00:15:00.873 "seek_hole": false,
00:15:00.873 "seek_data": false,
00:15:00.873 "copy": true,
00:15:00.873 "nvme_iov_md": false
00:15:00.873 },
00:15:00.873 "memory_domains": [
00:15:00.873 {
00:15:00.873 "dma_device_id": "system",
00:15:00.873 "dma_device_type": 1
00:15:00.873 },
00:15:00.873 {
00:15:00.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:00.873 "dma_device_type": 2
00:15:00.873 }
00:15:00.873 ],
00:15:00.873 "driver_specific": {}
00:15:00.873 }
00:15:00.873 ]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.873 BaseBdev4
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.873 12:35:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.873 [
00:15:00.873 {
00:15:00.873 "name": "BaseBdev4",
00:15:00.873 "aliases": [
00:15:00.873 "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8"
00:15:00.873 ],
00:15:00.873 "product_name": "Malloc disk",
00:15:00.873 "block_size": 512,
00:15:00.873 "num_blocks": 65536,
00:15:00.873 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8",
00:15:00.873 "assigned_rate_limits": {
00:15:00.873 "rw_ios_per_sec": 0,
00:15:00.873 "rw_mbytes_per_sec": 0,
00:15:00.873 "r_mbytes_per_sec": 0,
00:15:00.873 "w_mbytes_per_sec": 0
00:15:00.873 },
00:15:00.873 "claimed": false,
00:15:00.873 "zoned": false,
00:15:00.873 "supported_io_types": {
00:15:00.873 "read": true,
00:15:00.873 "write": true,
00:15:00.873 "unmap": true,
00:15:00.873 "flush": true,
00:15:00.873 "reset": true,
00:15:00.873 "nvme_admin": false,
00:15:00.873 "nvme_io": false,
00:15:00.873 "nvme_io_md": false,
00:15:00.873 "write_zeroes": true,
00:15:00.873 "zcopy": true,
00:15:00.873 "get_zone_info": false,
00:15:00.873 "zone_management": false,
00:15:00.873 "zone_append": false,
00:15:00.873 "compare": false,
00:15:00.873 "compare_and_write": false,
00:15:00.873 "abort": true,
00:15:00.873 "seek_hole": false,
00:15:00.873 "seek_data": false,
00:15:00.873 "copy": true,
00:15:00.873 "nvme_iov_md": false
00:15:00.873 },
00:15:00.873 "memory_domains": [
00:15:00.873 {
00:15:00.873 "dma_device_id": "system",
00:15:00.873 "dma_device_type": 1
00:15:00.873 },
00:15:00.873 {
00:15:00.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:00.873 "dma_device_type": 2
00:15:00.873 }
00:15:00.873 ],
00:15:00.873 "driver_specific": {}
00:15:00.873 }
00:15:00.873 ]
00:15:00.873 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.873 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:00.873 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:00.873 12:35:06
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:00.873 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:00.873 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.873 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.873 [2024-11-19 12:35:06.014718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:00.873 [2024-11-19 12:35:06.014790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:00.873 [2024-11-19 12:35:06.014816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:00.874 [2024-11-19 12:35:06.016693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:00.874 [2024-11-19 12:35:06.016758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:00.874 "name": "Existed_Raid",
00:15:00.874 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8",
00:15:00.874 "strip_size_kb": 64,
00:15:00.874 "state": "configuring",
00:15:00.874 "raid_level": "raid5f",
00:15:00.874 "superblock": true,
00:15:00.874 "num_base_bdevs": 4,
00:15:00.874 "num_base_bdevs_discovered": 3,
00:15:00.874 "num_base_bdevs_operational": 4,
00:15:00.874 "base_bdevs_list": [
00:15:00.874 {
00:15:00.874 "name": "BaseBdev1",
00:15:00.874 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:00.874 "is_configured": false,
00:15:00.874 "data_offset": 0,
00:15:00.874 "data_size": 0
00:15:00.874 },
00:15:00.874 {
00:15:00.874 "name": "BaseBdev2",
00:15:00.874 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192",
00:15:00.874 "is_configured": true,
00:15:00.874 "data_offset": 2048,
00:15:00.874 "data_size": 63488
00:15:00.874 },
00:15:00.874 {
00:15:00.874 "name": "BaseBdev3",
00:15:00.874 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92",
00:15:00.874 "is_configured": true,
00:15:00.874 "data_offset": 2048,
00:15:00.874 "data_size": 63488
00:15:00.874 },
00:15:00.874 {
00:15:00.874 "name": "BaseBdev4",
00:15:00.874 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8",
00:15:00.874 "is_configured": true,
00:15:00.874 "data_offset": 2048,
00:15:00.874 "data_size": 63488
00:15:00.874 }
00:15:00.874 ]
00:15:00.874 }'
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:00.874 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.443 [2024-11-19 12:35:06.477882] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:01.443 12:35:06
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:01.443 "name": "Existed_Raid",
00:15:01.443 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8",
00:15:01.443 "strip_size_kb": 64,
00:15:01.443 "state": "configuring",
00:15:01.443 "raid_level": "raid5f",
00:15:01.443 "superblock": true,
00:15:01.443 "num_base_bdevs": 4,
00:15:01.443 "num_base_bdevs_discovered": 2,
00:15:01.443 "num_base_bdevs_operational": 4,
00:15:01.443 "base_bdevs_list": [
00:15:01.443 {
00:15:01.443 "name": "BaseBdev1",
00:15:01.443 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:01.443 "is_configured": false,
00:15:01.443 "data_offset": 0,
00:15:01.443 "data_size": 0
00:15:01.443 },
00:15:01.443 {
00:15:01.443 "name": null,
00:15:01.443 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192",
00:15:01.443 "is_configured": false,
00:15:01.443 "data_offset": 0,
00:15:01.443 "data_size": 63488
00:15:01.443 },
00:15:01.443 {
00:15:01.443 "name": "BaseBdev3",
00:15:01.443 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92",
00:15:01.443 "is_configured": true,
00:15:01.443 "data_offset": 2048,
00:15:01.443 "data_size": 63488
00:15:01.443 },
00:15:01.443 {
00:15:01.443 "name": "BaseBdev4",
00:15:01.443 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8",
00:15:01.443 "is_configured": true,
00:15:01.443 "data_offset": 2048,
00:15:01.443 "data_size": 63488
00:15:01.443 }
00:15:01.443 ]
00:15:01.443 }'
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:01.443 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.702 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:01.702 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.702 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.702 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.962 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.962 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:15:01.962 12:35:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:01.962 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.962 12:35:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.962 [2024-11-19 12:35:07.011945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
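[Editor's illustration, not part of the captured log] The state checks in this test repeatedly run `rpc_cmd bdev_raid_get_bdevs all` and filter the result with `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing the state, raid level, strip size, and base-bdev counts. A minimal Python sketch of what that `verify_raid_bdev_state` shell helper computes, using an abridged copy of the Existed_Raid descriptor printed earlier in this log (the Python function is an assumption for illustration; it is not part of the SPDK test scripts):

```python
import json

# Abridged from the `bdev_raid_get_bdevs all` output captured above.
raid_bdevs = json.loads('''
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "raid5f",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
''')

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    """Select the named raid bdev (the jq: .[] | select(.name == name))
    and compare its fields against the expected values."""
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return info

# Same check the log performs: verify_raid_bdev_state Existed_Raid online raid5f 64 3
info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "online", "raid5f", 64, 3)
print(info["num_base_bdevs_discovered"])  # prints 3
```

As the later entries in this log show, the shell helper is re-invoked with expected state `configuring` once base bdevs have been removed from the array.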
00:15:01.962 BaseBdev1
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.962 [
00:15:01.962 {
00:15:01.962 "name": "BaseBdev1",
00:15:01.962 "aliases": [
00:15:01.962 "3c54f8fd-6682-4ea9-bc65-cd6a081dce35"
00:15:01.962 ],
00:15:01.962 "product_name": "Malloc disk",
00:15:01.962 "block_size": 512,
00:15:01.962 "num_blocks": 65536,
00:15:01.962 "uuid": "3c54f8fd-6682-4ea9-bc65-cd6a081dce35",
00:15:01.962 "assigned_rate_limits": {
00:15:01.962 "rw_ios_per_sec": 0,
00:15:01.962 "rw_mbytes_per_sec": 0,
00:15:01.962 "r_mbytes_per_sec": 0,
00:15:01.962 "w_mbytes_per_sec": 0
00:15:01.962 },
00:15:01.962 "claimed": true,
00:15:01.962 "claim_type": "exclusive_write",
00:15:01.962 "zoned": false,
00:15:01.962 "supported_io_types": {
00:15:01.962 "read": true,
00:15:01.962 "write": true,
00:15:01.962 "unmap": true,
00:15:01.962 "flush": true,
00:15:01.962 "reset": true,
00:15:01.962 "nvme_admin": false,
00:15:01.962 "nvme_io": false,
00:15:01.962 "nvme_io_md": false,
00:15:01.962 "write_zeroes": true,
00:15:01.962 "zcopy": true,
00:15:01.962 "get_zone_info": false,
00:15:01.962 "zone_management": false,
00:15:01.962 "zone_append": false,
00:15:01.962 "compare": false,
00:15:01.962 "compare_and_write": false,
00:15:01.962 "abort": true,
00:15:01.962 "seek_hole": false,
00:15:01.962 "seek_data": false,
00:15:01.962 "copy": true,
00:15:01.962 "nvme_iov_md": false
00:15:01.962 },
00:15:01.962 "memory_domains": [
00:15:01.962 {
00:15:01.962 "dma_device_id": "system",
00:15:01.962 "dma_device_type": 1
00:15:01.962 },
00:15:01.962 {
00:15:01.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:01.962 "dma_device_type": 2
00:15:01.962 }
00:15:01.962 ],
00:15:01.962 "driver_specific": {}
00:15:01.962 }
00:15:01.962 ]
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.962 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:01.962 "name": "Existed_Raid",
00:15:01.962 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8",
00:15:01.962 "strip_size_kb": 64,
00:15:01.962 "state": "configuring",
00:15:01.962 "raid_level": "raid5f",
00:15:01.962 "superblock": true,
00:15:01.962 "num_base_bdevs": 4,
00:15:01.962 "num_base_bdevs_discovered": 3,
00:15:01.962 "num_base_bdevs_operational": 4,
00:15:01.962 "base_bdevs_list": [
00:15:01.962 {
00:15:01.962 "name": "BaseBdev1",
00:15:01.962 "uuid": "3c54f8fd-6682-4ea9-bc65-cd6a081dce35",
00:15:01.962 "is_configured": true, 00:15:01.962 "data_offset": 2048, 00:15:01.962 "data_size": 63488 00:15:01.962 }, 00:15:01.962 { 00:15:01.962 "name": null, 00:15:01.962 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192", 00:15:01.962 "is_configured": false, 00:15:01.962 "data_offset": 0, 00:15:01.962 "data_size": 63488 00:15:01.962 }, 00:15:01.962 { 00:15:01.962 "name": "BaseBdev3", 00:15:01.962 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92", 00:15:01.962 "is_configured": true, 00:15:01.962 "data_offset": 2048, 00:15:01.962 "data_size": 63488 00:15:01.962 }, 00:15:01.962 { 00:15:01.962 "name": "BaseBdev4", 00:15:01.962 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8", 00:15:01.962 "is_configured": true, 00:15:01.962 "data_offset": 2048, 00:15:01.962 "data_size": 63488 00:15:01.962 } 00:15:01.962 ] 00:15:01.962 }' 00:15:01.963 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.963 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.222 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.222 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.222 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.222 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.481 [2024-11-19 12:35:07.531129] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.481 "name": "Existed_Raid", 00:15:02.481 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8", 00:15:02.481 "strip_size_kb": 64, 00:15:02.481 "state": "configuring", 00:15:02.481 "raid_level": "raid5f", 00:15:02.481 "superblock": true, 00:15:02.481 "num_base_bdevs": 4, 00:15:02.481 "num_base_bdevs_discovered": 2, 00:15:02.481 "num_base_bdevs_operational": 4, 00:15:02.481 "base_bdevs_list": [ 00:15:02.481 { 00:15:02.481 "name": "BaseBdev1", 00:15:02.481 "uuid": "3c54f8fd-6682-4ea9-bc65-cd6a081dce35", 00:15:02.481 "is_configured": true, 00:15:02.481 "data_offset": 2048, 00:15:02.481 "data_size": 63488 00:15:02.481 }, 00:15:02.481 { 00:15:02.481 "name": null, 00:15:02.481 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192", 00:15:02.481 "is_configured": false, 00:15:02.481 "data_offset": 0, 00:15:02.481 "data_size": 63488 00:15:02.481 }, 00:15:02.481 { 00:15:02.481 "name": null, 00:15:02.481 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92", 00:15:02.481 "is_configured": false, 00:15:02.481 "data_offset": 0, 00:15:02.481 "data_size": 63488 00:15:02.481 }, 00:15:02.481 { 00:15:02.481 "name": "BaseBdev4", 00:15:02.481 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8", 00:15:02.481 "is_configured": true, 00:15:02.481 "data_offset": 2048, 00:15:02.481 "data_size": 63488 00:15:02.481 } 00:15:02.481 ] 00:15:02.481 }' 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.481 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.741 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.741 12:35:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:02.741 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.741 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.741 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.741 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:02.741 12:35:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:02.741 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.741 12:35:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.000 [2024-11-19 12:35:08.002394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.000 "name": "Existed_Raid", 00:15:03.000 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8", 00:15:03.000 "strip_size_kb": 64, 00:15:03.000 "state": "configuring", 00:15:03.000 "raid_level": "raid5f", 00:15:03.000 "superblock": true, 00:15:03.000 "num_base_bdevs": 4, 00:15:03.000 "num_base_bdevs_discovered": 3, 00:15:03.000 "num_base_bdevs_operational": 4, 00:15:03.000 "base_bdevs_list": [ 00:15:03.000 { 00:15:03.000 "name": "BaseBdev1", 00:15:03.000 "uuid": "3c54f8fd-6682-4ea9-bc65-cd6a081dce35", 00:15:03.000 "is_configured": true, 00:15:03.000 "data_offset": 2048, 00:15:03.000 "data_size": 63488 00:15:03.000 }, 00:15:03.000 { 00:15:03.000 "name": null, 00:15:03.000 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192", 00:15:03.000 "is_configured": false, 00:15:03.000 "data_offset": 0, 00:15:03.000 "data_size": 63488 00:15:03.000 }, 00:15:03.000 { 00:15:03.000 "name": "BaseBdev3", 00:15:03.000 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92", 
00:15:03.000 "is_configured": true, 00:15:03.000 "data_offset": 2048, 00:15:03.000 "data_size": 63488 00:15:03.000 }, 00:15:03.000 { 00:15:03.000 "name": "BaseBdev4", 00:15:03.000 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8", 00:15:03.000 "is_configured": true, 00:15:03.000 "data_offset": 2048, 00:15:03.000 "data_size": 63488 00:15:03.000 } 00:15:03.000 ] 00:15:03.000 }' 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.000 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.259 [2024-11-19 12:35:08.501535] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.259 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.518 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.518 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.518 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.518 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.518 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.518 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.518 "name": "Existed_Raid", 00:15:03.518 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8", 00:15:03.518 "strip_size_kb": 64, 00:15:03.518 "state": "configuring", 00:15:03.518 "raid_level": "raid5f", 
00:15:03.518 "superblock": true, 00:15:03.518 "num_base_bdevs": 4, 00:15:03.518 "num_base_bdevs_discovered": 2, 00:15:03.518 "num_base_bdevs_operational": 4, 00:15:03.518 "base_bdevs_list": [ 00:15:03.518 { 00:15:03.518 "name": null, 00:15:03.518 "uuid": "3c54f8fd-6682-4ea9-bc65-cd6a081dce35", 00:15:03.518 "is_configured": false, 00:15:03.518 "data_offset": 0, 00:15:03.518 "data_size": 63488 00:15:03.518 }, 00:15:03.518 { 00:15:03.518 "name": null, 00:15:03.518 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192", 00:15:03.518 "is_configured": false, 00:15:03.518 "data_offset": 0, 00:15:03.518 "data_size": 63488 00:15:03.518 }, 00:15:03.518 { 00:15:03.518 "name": "BaseBdev3", 00:15:03.518 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92", 00:15:03.518 "is_configured": true, 00:15:03.518 "data_offset": 2048, 00:15:03.518 "data_size": 63488 00:15:03.518 }, 00:15:03.518 { 00:15:03.518 "name": "BaseBdev4", 00:15:03.518 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8", 00:15:03.518 "is_configured": true, 00:15:03.518 "data_offset": 2048, 00:15:03.518 "data_size": 63488 00:15:03.518 } 00:15:03.518 ] 00:15:03.518 }' 00:15:03.518 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.518 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.777 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.777 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.777 12:35:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.777 12:35:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:03.777 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.777 12:35:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:03.777 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:03.777 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.777 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.038 [2024-11-19 12:35:09.034990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.038 "name": "Existed_Raid", 00:15:04.038 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8", 00:15:04.038 "strip_size_kb": 64, 00:15:04.038 "state": "configuring", 00:15:04.038 "raid_level": "raid5f", 00:15:04.038 "superblock": true, 00:15:04.038 "num_base_bdevs": 4, 00:15:04.038 "num_base_bdevs_discovered": 3, 00:15:04.038 "num_base_bdevs_operational": 4, 00:15:04.038 "base_bdevs_list": [ 00:15:04.038 { 00:15:04.038 "name": null, 00:15:04.038 "uuid": "3c54f8fd-6682-4ea9-bc65-cd6a081dce35", 00:15:04.038 "is_configured": false, 00:15:04.038 "data_offset": 0, 00:15:04.038 "data_size": 63488 00:15:04.038 }, 00:15:04.038 { 00:15:04.038 "name": "BaseBdev2", 00:15:04.038 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192", 00:15:04.038 "is_configured": true, 00:15:04.038 "data_offset": 2048, 00:15:04.038 "data_size": 63488 00:15:04.038 }, 00:15:04.038 { 00:15:04.038 "name": "BaseBdev3", 00:15:04.038 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92", 00:15:04.038 "is_configured": true, 00:15:04.038 "data_offset": 2048, 00:15:04.038 "data_size": 63488 00:15:04.038 }, 00:15:04.038 { 00:15:04.038 "name": "BaseBdev4", 00:15:04.038 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8", 00:15:04.038 "is_configured": true, 00:15:04.038 "data_offset": 2048, 00:15:04.038 "data_size": 63488 00:15:04.038 } 00:15:04.038 ] 00:15:04.038 }' 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.038 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3c54f8fd-6682-4ea9-bc65-cd6a081dce35 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.303 NewBaseBdev 00:15:04.303 [2024-11-19 12:35:09.557175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 
00:15:04.303 [2024-11-19 12:35:09.557388] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:04.303 [2024-11-19 12:35:09.557402] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:04.303 [2024-11-19 12:35:09.557635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:04.303 [2024-11-19 12:35:09.558074] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:04.303 [2024-11-19 12:35:09.558093] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:15:04.303 [2024-11-19 12:35:09.558190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.303 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.562 12:35:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.562 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:04.562 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.562 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.562 [ 00:15:04.562 { 00:15:04.562 "name": "NewBaseBdev", 00:15:04.562 "aliases": [ 00:15:04.562 "3c54f8fd-6682-4ea9-bc65-cd6a081dce35" 00:15:04.562 ], 00:15:04.562 "product_name": "Malloc disk", 00:15:04.562 "block_size": 512, 00:15:04.562 "num_blocks": 65536, 00:15:04.562 "uuid": "3c54f8fd-6682-4ea9-bc65-cd6a081dce35", 00:15:04.562 "assigned_rate_limits": { 00:15:04.562 "rw_ios_per_sec": 0, 00:15:04.562 "rw_mbytes_per_sec": 0, 00:15:04.562 "r_mbytes_per_sec": 0, 00:15:04.562 "w_mbytes_per_sec": 0 00:15:04.562 }, 00:15:04.562 "claimed": true, 00:15:04.562 "claim_type": "exclusive_write", 00:15:04.562 "zoned": false, 00:15:04.562 "supported_io_types": { 00:15:04.562 "read": true, 00:15:04.562 "write": true, 00:15:04.562 "unmap": true, 00:15:04.562 "flush": true, 00:15:04.562 "reset": true, 00:15:04.562 "nvme_admin": false, 00:15:04.562 "nvme_io": false, 00:15:04.562 "nvme_io_md": false, 00:15:04.562 "write_zeroes": true, 00:15:04.562 "zcopy": true, 00:15:04.562 "get_zone_info": false, 00:15:04.562 "zone_management": false, 00:15:04.562 "zone_append": false, 00:15:04.563 "compare": false, 00:15:04.563 "compare_and_write": false, 00:15:04.563 "abort": true, 00:15:04.563 "seek_hole": false, 00:15:04.563 "seek_data": false, 00:15:04.563 "copy": true, 00:15:04.563 "nvme_iov_md": false 00:15:04.563 }, 00:15:04.563 "memory_domains": [ 00:15:04.563 { 00:15:04.563 "dma_device_id": "system", 00:15:04.563 "dma_device_type": 1 00:15:04.563 }, 00:15:04.563 { 00:15:04.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:04.563 "dma_device_type": 2 00:15:04.563 } 00:15:04.563 ], 00:15:04.563 "driver_specific": {} 00:15:04.563 } 00:15:04.563 ] 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.563 "name": "Existed_Raid", 00:15:04.563 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8", 00:15:04.563 "strip_size_kb": 64, 00:15:04.563 "state": "online", 00:15:04.563 "raid_level": "raid5f", 00:15:04.563 "superblock": true, 00:15:04.563 "num_base_bdevs": 4, 00:15:04.563 "num_base_bdevs_discovered": 4, 00:15:04.563 "num_base_bdevs_operational": 4, 00:15:04.563 "base_bdevs_list": [ 00:15:04.563 { 00:15:04.563 "name": "NewBaseBdev", 00:15:04.563 "uuid": "3c54f8fd-6682-4ea9-bc65-cd6a081dce35", 00:15:04.563 "is_configured": true, 00:15:04.563 "data_offset": 2048, 00:15:04.563 "data_size": 63488 00:15:04.563 }, 00:15:04.563 { 00:15:04.563 "name": "BaseBdev2", 00:15:04.563 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192", 00:15:04.563 "is_configured": true, 00:15:04.563 "data_offset": 2048, 00:15:04.563 "data_size": 63488 00:15:04.563 }, 00:15:04.563 { 00:15:04.563 "name": "BaseBdev3", 00:15:04.563 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92", 00:15:04.563 "is_configured": true, 00:15:04.563 "data_offset": 2048, 00:15:04.563 "data_size": 63488 00:15:04.563 }, 00:15:04.563 { 00:15:04.563 "name": "BaseBdev4", 00:15:04.563 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8", 00:15:04.563 "is_configured": true, 00:15:04.563 "data_offset": 2048, 00:15:04.563 "data_size": 63488 00:15:04.563 } 00:15:04.563 ] 00:15:04.563 }' 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.563 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:04.822 12:35:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.822 [2024-11-19 12:35:09.976831] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.822 12:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.822 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.822 "name": "Existed_Raid", 00:15:04.822 "aliases": [ 00:15:04.822 "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8" 00:15:04.822 ], 00:15:04.822 "product_name": "Raid Volume", 00:15:04.822 "block_size": 512, 00:15:04.822 "num_blocks": 190464, 00:15:04.822 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8", 00:15:04.822 "assigned_rate_limits": { 00:15:04.822 "rw_ios_per_sec": 0, 00:15:04.822 "rw_mbytes_per_sec": 0, 00:15:04.822 "r_mbytes_per_sec": 0, 00:15:04.822 "w_mbytes_per_sec": 0 00:15:04.822 }, 00:15:04.822 "claimed": false, 00:15:04.822 "zoned": false, 00:15:04.822 "supported_io_types": { 00:15:04.822 "read": true, 00:15:04.822 
"write": true, 00:15:04.822 "unmap": false, 00:15:04.822 "flush": false, 00:15:04.822 "reset": true, 00:15:04.822 "nvme_admin": false, 00:15:04.822 "nvme_io": false, 00:15:04.822 "nvme_io_md": false, 00:15:04.822 "write_zeroes": true, 00:15:04.822 "zcopy": false, 00:15:04.822 "get_zone_info": false, 00:15:04.822 "zone_management": false, 00:15:04.822 "zone_append": false, 00:15:04.822 "compare": false, 00:15:04.822 "compare_and_write": false, 00:15:04.822 "abort": false, 00:15:04.822 "seek_hole": false, 00:15:04.822 "seek_data": false, 00:15:04.822 "copy": false, 00:15:04.822 "nvme_iov_md": false 00:15:04.822 }, 00:15:04.822 "driver_specific": { 00:15:04.822 "raid": { 00:15:04.822 "uuid": "d8d58eaf-4b19-4ae2-a007-a9e954cafdf8", 00:15:04.822 "strip_size_kb": 64, 00:15:04.822 "state": "online", 00:15:04.822 "raid_level": "raid5f", 00:15:04.822 "superblock": true, 00:15:04.822 "num_base_bdevs": 4, 00:15:04.822 "num_base_bdevs_discovered": 4, 00:15:04.822 "num_base_bdevs_operational": 4, 00:15:04.822 "base_bdevs_list": [ 00:15:04.822 { 00:15:04.822 "name": "NewBaseBdev", 00:15:04.822 "uuid": "3c54f8fd-6682-4ea9-bc65-cd6a081dce35", 00:15:04.822 "is_configured": true, 00:15:04.822 "data_offset": 2048, 00:15:04.822 "data_size": 63488 00:15:04.822 }, 00:15:04.822 { 00:15:04.822 "name": "BaseBdev2", 00:15:04.822 "uuid": "6b2bcea8-b732-413a-954d-6b79f926c192", 00:15:04.822 "is_configured": true, 00:15:04.822 "data_offset": 2048, 00:15:04.822 "data_size": 63488 00:15:04.822 }, 00:15:04.822 { 00:15:04.822 "name": "BaseBdev3", 00:15:04.822 "uuid": "6a2c1d74-a5ca-4b29-bbb3-c3f75aa3cd92", 00:15:04.822 "is_configured": true, 00:15:04.822 "data_offset": 2048, 00:15:04.822 "data_size": 63488 00:15:04.822 }, 00:15:04.822 { 00:15:04.823 "name": "BaseBdev4", 00:15:04.823 "uuid": "c94ab0b7-28b3-4c97-9f81-b35b5585a8b8", 00:15:04.823 "is_configured": true, 00:15:04.823 "data_offset": 2048, 00:15:04.823 "data_size": 63488 00:15:04.823 } 00:15:04.823 ] 00:15:04.823 } 00:15:04.823 } 
00:15:04.823 }' 00:15:04.823 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.823 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:04.823 BaseBdev2 00:15:04.823 BaseBdev3 00:15:04.823 BaseBdev4' 00:15:04.823 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.082 12:35:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.082 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.082 [2024-11-19 12:35:10.304038] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.082 [2024-11-19 12:35:10.304082] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.082 [2024-11-19 12:35:10.304173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.083 [2024-11-19 12:35:10.304429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.083 [2024-11-19 12:35:10.304452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:15:05.083 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.083 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94119 00:15:05.083 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94119 ']' 00:15:05.083 12:35:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 94119 00:15:05.083 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:05.083 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.083 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94119 00:15:05.342 killing process with pid 94119 00:15:05.342 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:05.342 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:05.342 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94119' 00:15:05.342 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94119 00:15:05.342 [2024-11-19 12:35:10.356983] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.342 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94119 00:15:05.342 [2024-11-19 12:35:10.398407] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.602 12:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:05.602 00:15:05.602 real 0m9.633s 00:15:05.602 user 0m16.333s 00:15:05.602 sys 0m2.201s 00:15:05.602 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:05.602 ************************************ 00:15:05.602 END TEST raid5f_state_function_test_sb 00:15:05.602 ************************************ 00:15:05.602 12:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.602 12:35:10 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:15:05.602 12:35:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:05.602 12:35:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:05.602 12:35:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.602 ************************************ 00:15:05.602 START TEST raid5f_superblock_test 00:15:05.602 ************************************ 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94773 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94773 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94773 ']' 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:05.602 12:35:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.602 [2024-11-19 12:35:10.803130] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:05.602 [2024-11-19 12:35:10.803369] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94773 ] 00:15:05.862 [2024-11-19 12:35:10.964655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.862 [2024-11-19 12:35:11.016244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.862 [2024-11-19 12:35:11.057888] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.862 [2024-11-19 12:35:11.057932] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.430 malloc1 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.430 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.690 [2024-11-19 12:35:11.692210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.690 [2024-11-19 12:35:11.692403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.690 [2024-11-19 12:35:11.692448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:06.690 [2024-11-19 12:35:11.692486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.690 [2024-11-19 12:35:11.694677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.690 [2024-11-19 12:35:11.694792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.690 pt1 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.690 malloc2 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.690 [2024-11-19 12:35:11.735306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:06.690 [2024-11-19 12:35:11.735472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.690 [2024-11-19 12:35:11.735497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:06.690 [2024-11-19 12:35:11.735511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.690 [2024-11-19 12:35:11.738070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.690 [2024-11-19 12:35:11.738109] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:06.690 pt2 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.690 malloc3 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.690 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.690 [2024-11-19 12:35:11.763932] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:06.690 [2024-11-19 12:35:11.764081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.691 [2024-11-19 12:35:11.764118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:06.691 [2024-11-19 12:35:11.764148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.691 [2024-11-19 12:35:11.766257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.691 [2024-11-19 12:35:11.766331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:06.691 pt3 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.691 12:35:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.691 malloc4 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.691 [2024-11-19 12:35:11.796557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:06.691 [2024-11-19 12:35:11.796706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.691 [2024-11-19 12:35:11.796740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:06.691 [2024-11-19 12:35:11.796782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.691 [2024-11-19 12:35:11.798865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.691 [2024-11-19 12:35:11.798941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:06.691 pt4 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.691 [2024-11-19 12:35:11.808629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.691 [2024-11-19 12:35:11.810492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.691 [2024-11-19 12:35:11.810551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:06.691 [2024-11-19 12:35:11.810610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:06.691 [2024-11-19 12:35:11.810799] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:06.691 [2024-11-19 12:35:11.810814] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:06.691 [2024-11-19 12:35:11.811078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:06.691 [2024-11-19 12:35:11.811545] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:06.691 [2024-11-19 12:35:11.811556] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:06.691 [2024-11-19 12:35:11.811696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.691 
12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.691 "name": "raid_bdev1", 00:15:06.691 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:06.691 "strip_size_kb": 64, 00:15:06.691 "state": "online", 00:15:06.691 "raid_level": "raid5f", 00:15:06.691 "superblock": true, 00:15:06.691 "num_base_bdevs": 4, 00:15:06.691 "num_base_bdevs_discovered": 4, 00:15:06.691 "num_base_bdevs_operational": 4, 00:15:06.691 "base_bdevs_list": [ 00:15:06.691 { 00:15:06.691 "name": "pt1", 00:15:06.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.691 "is_configured": true, 00:15:06.691 "data_offset": 2048, 00:15:06.691 "data_size": 63488 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "name": "pt2", 00:15:06.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.691 "is_configured": true, 00:15:06.691 "data_offset": 2048, 00:15:06.691 
"data_size": 63488 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "name": "pt3", 00:15:06.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.691 "is_configured": true, 00:15:06.691 "data_offset": 2048, 00:15:06.691 "data_size": 63488 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "name": "pt4", 00:15:06.691 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.691 "is_configured": true, 00:15:06.691 "data_offset": 2048, 00:15:06.691 "data_size": 63488 00:15:06.691 } 00:15:06.691 ] 00:15:06.691 }' 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.691 12:35:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.260 [2024-11-19 12:35:12.300796] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.260 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:07.260 "name": "raid_bdev1", 00:15:07.260 "aliases": [ 00:15:07.260 "2c58c0ce-4213-40af-8b05-1d408dd19c13" 00:15:07.260 ], 00:15:07.260 "product_name": "Raid Volume", 00:15:07.260 "block_size": 512, 00:15:07.260 "num_blocks": 190464, 00:15:07.260 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:07.260 "assigned_rate_limits": { 00:15:07.260 "rw_ios_per_sec": 0, 00:15:07.260 "rw_mbytes_per_sec": 0, 00:15:07.260 "r_mbytes_per_sec": 0, 00:15:07.260 "w_mbytes_per_sec": 0 00:15:07.260 }, 00:15:07.260 "claimed": false, 00:15:07.260 "zoned": false, 00:15:07.260 "supported_io_types": { 00:15:07.260 "read": true, 00:15:07.260 "write": true, 00:15:07.260 "unmap": false, 00:15:07.260 "flush": false, 00:15:07.260 "reset": true, 00:15:07.260 "nvme_admin": false, 00:15:07.260 "nvme_io": false, 00:15:07.260 "nvme_io_md": false, 00:15:07.260 "write_zeroes": true, 00:15:07.260 "zcopy": false, 00:15:07.260 "get_zone_info": false, 00:15:07.260 "zone_management": false, 00:15:07.260 "zone_append": false, 00:15:07.260 "compare": false, 00:15:07.260 "compare_and_write": false, 00:15:07.260 "abort": false, 00:15:07.260 "seek_hole": false, 00:15:07.260 "seek_data": false, 00:15:07.260 "copy": false, 00:15:07.260 "nvme_iov_md": false 00:15:07.260 }, 00:15:07.260 "driver_specific": { 00:15:07.260 "raid": { 00:15:07.260 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:07.260 "strip_size_kb": 64, 00:15:07.260 "state": "online", 00:15:07.260 "raid_level": "raid5f", 00:15:07.260 "superblock": true, 00:15:07.260 "num_base_bdevs": 4, 00:15:07.260 "num_base_bdevs_discovered": 4, 00:15:07.260 "num_base_bdevs_operational": 4, 00:15:07.260 "base_bdevs_list": [ 00:15:07.260 { 00:15:07.260 "name": "pt1", 00:15:07.260 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.260 "is_configured": true, 00:15:07.260 "data_offset": 2048, 
00:15:07.260 "data_size": 63488 00:15:07.260 }, 00:15:07.260 { 00:15:07.260 "name": "pt2", 00:15:07.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.260 "is_configured": true, 00:15:07.260 "data_offset": 2048, 00:15:07.260 "data_size": 63488 00:15:07.260 }, 00:15:07.260 { 00:15:07.260 "name": "pt3", 00:15:07.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.260 "is_configured": true, 00:15:07.260 "data_offset": 2048, 00:15:07.260 "data_size": 63488 00:15:07.260 }, 00:15:07.260 { 00:15:07.260 "name": "pt4", 00:15:07.260 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.260 "is_configured": true, 00:15:07.260 "data_offset": 2048, 00:15:07.260 "data_size": 63488 00:15:07.260 } 00:15:07.260 ] 00:15:07.260 } 00:15:07.260 } 00:15:07.260 }' 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:07.261 pt2 00:15:07.261 pt3 00:15:07.261 pt4' 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.261 12:35:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.261 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.520 [2024-11-19 12:35:12.648181] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2c58c0ce-4213-40af-8b05-1d408dd19c13 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
2c58c0ce-4213-40af-8b05-1d408dd19c13 ']' 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.520 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.520 [2024-11-19 12:35:12.695922] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.521 [2024-11-19 12:35:12.695965] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.521 [2024-11-19 12:35:12.696063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.521 [2024-11-19 12:35:12.696174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.521 [2024-11-19 12:35:12.696187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:07.521 
12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.521 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.780 12:35:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:07.780 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.781 [2024-11-19 12:35:12.859826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:07.781 [2024-11-19 12:35:12.861821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:07.781 [2024-11-19 12:35:12.861917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:07.781 [2024-11-19 12:35:12.861966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:07.781 [2024-11-19 12:35:12.862039] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:07.781 [2024-11-19 12:35:12.862154] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:07.781 [2024-11-19 12:35:12.862236] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:07.781 [2024-11-19 12:35:12.862291] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:07.781 [2024-11-19 12:35:12.862342] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.781 [2024-11-19 12:35:12.862375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:07.781 request: 00:15:07.781 { 00:15:07.781 "name": "raid_bdev1", 00:15:07.781 "raid_level": "raid5f", 00:15:07.781 "base_bdevs": [ 00:15:07.781 "malloc1", 00:15:07.781 "malloc2", 00:15:07.781 "malloc3", 00:15:07.781 "malloc4" 00:15:07.781 ], 00:15:07.781 "strip_size_kb": 64, 00:15:07.781 "superblock": false, 00:15:07.781 "method": "bdev_raid_create", 00:15:07.781 "req_id": 1 00:15:07.781 } 00:15:07.781 Got JSON-RPC error response 
00:15:07.781 response: 00:15:07.781 { 00:15:07.781 "code": -17, 00:15:07.781 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:07.781 } 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.781 [2024-11-19 12:35:12.931649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.781 [2024-11-19 12:35:12.931748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:07.781 [2024-11-19 12:35:12.931782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:07.781 [2024-11-19 12:35:12.931791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.781 [2024-11-19 12:35:12.933969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.781 [2024-11-19 12:35:12.934006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.781 [2024-11-19 12:35:12.934099] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:07.781 [2024-11-19 12:35:12.934146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.781 pt1 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.781 "name": "raid_bdev1", 00:15:07.781 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:07.781 "strip_size_kb": 64, 00:15:07.781 "state": "configuring", 00:15:07.781 "raid_level": "raid5f", 00:15:07.781 "superblock": true, 00:15:07.781 "num_base_bdevs": 4, 00:15:07.781 "num_base_bdevs_discovered": 1, 00:15:07.781 "num_base_bdevs_operational": 4, 00:15:07.781 "base_bdevs_list": [ 00:15:07.781 { 00:15:07.781 "name": "pt1", 00:15:07.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.781 "is_configured": true, 00:15:07.781 "data_offset": 2048, 00:15:07.781 "data_size": 63488 00:15:07.781 }, 00:15:07.781 { 00:15:07.781 "name": null, 00:15:07.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.781 "is_configured": false, 00:15:07.781 "data_offset": 2048, 00:15:07.781 "data_size": 63488 00:15:07.781 }, 00:15:07.781 { 00:15:07.781 "name": null, 00:15:07.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.781 "is_configured": false, 00:15:07.781 "data_offset": 2048, 00:15:07.781 "data_size": 63488 00:15:07.781 }, 00:15:07.781 { 00:15:07.781 "name": null, 00:15:07.781 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.781 "is_configured": false, 00:15:07.781 "data_offset": 2048, 00:15:07.781 "data_size": 63488 00:15:07.781 } 00:15:07.781 ] 00:15:07.781 }' 
00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.781 12:35:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.368 [2024-11-19 12:35:13.358904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.368 [2024-11-19 12:35:13.359083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.368 [2024-11-19 12:35:13.359127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:08.368 [2024-11-19 12:35:13.359157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.368 [2024-11-19 12:35:13.359610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.368 [2024-11-19 12:35:13.359667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.368 [2024-11-19 12:35:13.359791] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:08.368 [2024-11-19 12:35:13.359844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:08.368 pt2 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.368 [2024-11-19 12:35:13.366904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:08.368 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.368 "name": "raid_bdev1", 00:15:08.368 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:08.368 "strip_size_kb": 64, 00:15:08.369 "state": "configuring", 00:15:08.369 "raid_level": "raid5f", 00:15:08.369 "superblock": true, 00:15:08.369 "num_base_bdevs": 4, 00:15:08.369 "num_base_bdevs_discovered": 1, 00:15:08.369 "num_base_bdevs_operational": 4, 00:15:08.369 "base_bdevs_list": [ 00:15:08.369 { 00:15:08.369 "name": "pt1", 00:15:08.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.369 "is_configured": true, 00:15:08.369 "data_offset": 2048, 00:15:08.369 "data_size": 63488 00:15:08.369 }, 00:15:08.369 { 00:15:08.369 "name": null, 00:15:08.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.369 "is_configured": false, 00:15:08.369 "data_offset": 0, 00:15:08.369 "data_size": 63488 00:15:08.369 }, 00:15:08.369 { 00:15:08.369 "name": null, 00:15:08.369 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.369 "is_configured": false, 00:15:08.369 "data_offset": 2048, 00:15:08.369 "data_size": 63488 00:15:08.369 }, 00:15:08.369 { 00:15:08.369 "name": null, 00:15:08.369 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:08.369 "is_configured": false, 00:15:08.369 "data_offset": 2048, 00:15:08.369 "data_size": 63488 00:15:08.369 } 00:15:08.369 ] 00:15:08.369 }' 00:15:08.369 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.369 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.641 [2024-11-19 12:35:13.814353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.641 [2024-11-19 12:35:13.814448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.641 [2024-11-19 12:35:13.814468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:08.641 [2024-11-19 12:35:13.814478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.641 [2024-11-19 12:35:13.814918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.641 [2024-11-19 12:35:13.814939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.641 [2024-11-19 12:35:13.815017] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:08.641 [2024-11-19 12:35:13.815039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:08.641 pt2 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.641 [2024-11-19 12:35:13.826268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:08.641 [2024-11-19 12:35:13.826339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.641 [2024-11-19 12:35:13.826359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:08.641 [2024-11-19 12:35:13.826369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.641 [2024-11-19 12:35:13.826783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.641 [2024-11-19 12:35:13.826805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:08.641 [2024-11-19 12:35:13.826876] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:08.641 [2024-11-19 12:35:13.826897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:08.641 pt3 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:08.641 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.642 [2024-11-19 12:35:13.838260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:08.642 [2024-11-19 12:35:13.838344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.642 [2024-11-19 12:35:13.838363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:08.642 [2024-11-19 12:35:13.838374] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.642 [2024-11-19 12:35:13.838737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.642 [2024-11-19 12:35:13.838773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:08.642 [2024-11-19 12:35:13.838841] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:08.642 [2024-11-19 12:35:13.838864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:08.642 [2024-11-19 12:35:13.838976] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:08.642 [2024-11-19 12:35:13.838987] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:08.642 [2024-11-19 12:35:13.839218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:08.642 [2024-11-19 12:35:13.839708] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:08.642 [2024-11-19 12:35:13.839725] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:08.642 [2024-11-19 12:35:13.839850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.642 pt4 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.642 "name": "raid_bdev1", 00:15:08.642 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:08.642 "strip_size_kb": 64, 00:15:08.642 "state": "online", 00:15:08.642 "raid_level": "raid5f", 00:15:08.642 "superblock": true, 00:15:08.642 "num_base_bdevs": 4, 00:15:08.642 "num_base_bdevs_discovered": 4, 00:15:08.642 "num_base_bdevs_operational": 4, 00:15:08.642 "base_bdevs_list": [ 00:15:08.642 { 00:15:08.642 "name": "pt1", 00:15:08.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.642 "is_configured": true, 00:15:08.642 
"data_offset": 2048, 00:15:08.642 "data_size": 63488 00:15:08.642 }, 00:15:08.642 { 00:15:08.642 "name": "pt2", 00:15:08.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.642 "is_configured": true, 00:15:08.642 "data_offset": 2048, 00:15:08.642 "data_size": 63488 00:15:08.642 }, 00:15:08.642 { 00:15:08.642 "name": "pt3", 00:15:08.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.642 "is_configured": true, 00:15:08.642 "data_offset": 2048, 00:15:08.642 "data_size": 63488 00:15:08.642 }, 00:15:08.642 { 00:15:08.642 "name": "pt4", 00:15:08.642 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:08.642 "is_configured": true, 00:15:08.642 "data_offset": 2048, 00:15:08.642 "data_size": 63488 00:15:08.642 } 00:15:08.642 ] 00:15:08.642 }' 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.642 12:35:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.211 12:35:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.211 [2024-11-19 12:35:14.249870] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.211 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.211 "name": "raid_bdev1", 00:15:09.211 "aliases": [ 00:15:09.211 "2c58c0ce-4213-40af-8b05-1d408dd19c13" 00:15:09.211 ], 00:15:09.211 "product_name": "Raid Volume", 00:15:09.211 "block_size": 512, 00:15:09.211 "num_blocks": 190464, 00:15:09.211 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:09.211 "assigned_rate_limits": { 00:15:09.211 "rw_ios_per_sec": 0, 00:15:09.211 "rw_mbytes_per_sec": 0, 00:15:09.211 "r_mbytes_per_sec": 0, 00:15:09.211 "w_mbytes_per_sec": 0 00:15:09.211 }, 00:15:09.211 "claimed": false, 00:15:09.211 "zoned": false, 00:15:09.211 "supported_io_types": { 00:15:09.211 "read": true, 00:15:09.211 "write": true, 00:15:09.211 "unmap": false, 00:15:09.211 "flush": false, 00:15:09.211 "reset": true, 00:15:09.211 "nvme_admin": false, 00:15:09.211 "nvme_io": false, 00:15:09.211 "nvme_io_md": false, 00:15:09.211 "write_zeroes": true, 00:15:09.211 "zcopy": false, 00:15:09.211 "get_zone_info": false, 00:15:09.211 "zone_management": false, 00:15:09.211 "zone_append": false, 00:15:09.211 "compare": false, 00:15:09.211 "compare_and_write": false, 00:15:09.211 "abort": false, 00:15:09.211 "seek_hole": false, 00:15:09.211 "seek_data": false, 00:15:09.211 "copy": false, 00:15:09.211 "nvme_iov_md": false 00:15:09.211 }, 00:15:09.211 "driver_specific": { 00:15:09.211 "raid": { 00:15:09.211 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:09.211 "strip_size_kb": 64, 00:15:09.211 "state": "online", 00:15:09.211 "raid_level": "raid5f", 00:15:09.211 "superblock": true, 00:15:09.211 "num_base_bdevs": 4, 00:15:09.211 "num_base_bdevs_discovered": 4, 
00:15:09.211 "num_base_bdevs_operational": 4, 00:15:09.211 "base_bdevs_list": [ 00:15:09.211 { 00:15:09.211 "name": "pt1", 00:15:09.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.211 "is_configured": true, 00:15:09.211 "data_offset": 2048, 00:15:09.211 "data_size": 63488 00:15:09.211 }, 00:15:09.211 { 00:15:09.211 "name": "pt2", 00:15:09.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.211 "is_configured": true, 00:15:09.211 "data_offset": 2048, 00:15:09.211 "data_size": 63488 00:15:09.211 }, 00:15:09.211 { 00:15:09.211 "name": "pt3", 00:15:09.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.211 "is_configured": true, 00:15:09.211 "data_offset": 2048, 00:15:09.211 "data_size": 63488 00:15:09.211 }, 00:15:09.211 { 00:15:09.211 "name": "pt4", 00:15:09.212 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.212 "is_configured": true, 00:15:09.212 "data_offset": 2048, 00:15:09.212 "data_size": 63488 00:15:09.212 } 00:15:09.212 ] 00:15:09.212 } 00:15:09.212 } 00:15:09.212 }' 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:09.212 pt2 00:15:09.212 pt3 00:15:09.212 pt4' 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.212 12:35:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.212 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.471 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.472 
12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.472 [2024-11-19 12:35:14.593259] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2c58c0ce-4213-40af-8b05-1d408dd19c13 '!=' 2c58c0ce-4213-40af-8b05-1d408dd19c13 ']' 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.472 [2024-11-19 12:35:14.641039] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.472 "name": "raid_bdev1", 00:15:09.472 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:09.472 "strip_size_kb": 64, 00:15:09.472 "state": "online", 00:15:09.472 "raid_level": "raid5f", 00:15:09.472 "superblock": true, 00:15:09.472 "num_base_bdevs": 4, 00:15:09.472 "num_base_bdevs_discovered": 3, 00:15:09.472 "num_base_bdevs_operational": 3, 00:15:09.472 "base_bdevs_list": [ 00:15:09.472 { 00:15:09.472 "name": null, 00:15:09.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.472 "is_configured": false, 00:15:09.472 "data_offset": 0, 00:15:09.472 "data_size": 63488 00:15:09.472 }, 00:15:09.472 { 00:15:09.472 "name": "pt2", 00:15:09.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.472 "is_configured": true, 00:15:09.472 "data_offset": 2048, 00:15:09.472 "data_size": 63488 00:15:09.472 }, 00:15:09.472 { 00:15:09.472 "name": "pt3", 00:15:09.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.472 "is_configured": true, 00:15:09.472 "data_offset": 2048, 00:15:09.472 "data_size": 63488 00:15:09.472 }, 00:15:09.472 { 00:15:09.472 "name": "pt4", 00:15:09.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.472 "is_configured": true, 00:15:09.472 
"data_offset": 2048, 00:15:09.472 "data_size": 63488 00:15:09.472 } 00:15:09.472 ] 00:15:09.472 }' 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.472 12:35:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.041 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:10.041 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.041 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.041 [2024-11-19 12:35:15.032295] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.041 [2024-11-19 12:35:15.032344] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.041 [2024-11-19 12:35:15.032443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.041 [2024-11-19 12:35:15.032513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.041 [2024-11-19 12:35:15.032525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:10.041 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.041 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.041 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.042 [2024-11-19 12:35:15.128089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.042 [2024-11-19 12:35:15.128261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.042 [2024-11-19 12:35:15.128286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:10.042 [2024-11-19 12:35:15.128296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.042 [2024-11-19 12:35:15.130419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.042 [2024-11-19 12:35:15.130466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.042 [2024-11-19 12:35:15.130545] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.042 [2024-11-19 12:35:15.130581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.042 pt2 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.042 "name": "raid_bdev1", 00:15:10.042 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:10.042 "strip_size_kb": 64, 00:15:10.042 "state": "configuring", 00:15:10.042 "raid_level": "raid5f", 00:15:10.042 "superblock": true, 00:15:10.042 
"num_base_bdevs": 4, 00:15:10.042 "num_base_bdevs_discovered": 1, 00:15:10.042 "num_base_bdevs_operational": 3, 00:15:10.042 "base_bdevs_list": [ 00:15:10.042 { 00:15:10.042 "name": null, 00:15:10.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.042 "is_configured": false, 00:15:10.042 "data_offset": 2048, 00:15:10.042 "data_size": 63488 00:15:10.042 }, 00:15:10.042 { 00:15:10.042 "name": "pt2", 00:15:10.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.042 "is_configured": true, 00:15:10.042 "data_offset": 2048, 00:15:10.042 "data_size": 63488 00:15:10.042 }, 00:15:10.042 { 00:15:10.042 "name": null, 00:15:10.042 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.042 "is_configured": false, 00:15:10.042 "data_offset": 2048, 00:15:10.042 "data_size": 63488 00:15:10.042 }, 00:15:10.042 { 00:15:10.042 "name": null, 00:15:10.042 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.042 "is_configured": false, 00:15:10.042 "data_offset": 2048, 00:15:10.042 "data_size": 63488 00:15:10.042 } 00:15:10.042 ] 00:15:10.042 }' 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.042 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.610 [2024-11-19 12:35:15.575377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.610 [2024-11-19 
12:35:15.575578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.610 [2024-11-19 12:35:15.575619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:10.610 [2024-11-19 12:35:15.575653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.610 [2024-11-19 12:35:15.576118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.610 [2024-11-19 12:35:15.576186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.610 [2024-11-19 12:35:15.576298] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:10.610 [2024-11-19 12:35:15.576360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.610 pt3 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.610 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:10.611 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.611 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.611 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.611 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.611 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.611 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.611 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.611 "name": "raid_bdev1", 00:15:10.611 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:10.611 "strip_size_kb": 64, 00:15:10.611 "state": "configuring", 00:15:10.611 "raid_level": "raid5f", 00:15:10.611 "superblock": true, 00:15:10.611 "num_base_bdevs": 4, 00:15:10.611 "num_base_bdevs_discovered": 2, 00:15:10.611 "num_base_bdevs_operational": 3, 00:15:10.611 "base_bdevs_list": [ 00:15:10.611 { 00:15:10.611 "name": null, 00:15:10.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.611 "is_configured": false, 00:15:10.611 "data_offset": 2048, 00:15:10.611 "data_size": 63488 00:15:10.611 }, 00:15:10.611 { 00:15:10.611 "name": "pt2", 00:15:10.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.611 "is_configured": true, 00:15:10.611 "data_offset": 2048, 00:15:10.611 "data_size": 63488 00:15:10.611 }, 00:15:10.611 { 00:15:10.611 "name": "pt3", 00:15:10.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.611 "is_configured": true, 00:15:10.611 "data_offset": 2048, 00:15:10.611 "data_size": 63488 00:15:10.611 }, 00:15:10.611 { 00:15:10.611 "name": null, 00:15:10.611 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.611 "is_configured": false, 00:15:10.611 "data_offset": 2048, 
00:15:10.611 "data_size": 63488 00:15:10.611 } 00:15:10.611 ] 00:15:10.611 }' 00:15:10.611 12:35:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.611 12:35:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.870 [2024-11-19 12:35:16.022654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:10.870 [2024-11-19 12:35:16.022770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.870 [2024-11-19 12:35:16.022798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:10.870 [2024-11-19 12:35:16.022811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.870 [2024-11-19 12:35:16.023217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.870 [2024-11-19 12:35:16.023331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:10.870 [2024-11-19 12:35:16.023418] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:10.870 [2024-11-19 12:35:16.023445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:10.870 [2024-11-19 12:35:16.023545] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:10.870 [2024-11-19 12:35:16.023556] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:10.870 [2024-11-19 12:35:16.023792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:10.870 [2024-11-19 12:35:16.024316] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:10.870 [2024-11-19 12:35:16.024328] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:10.870 [2024-11-19 12:35:16.024553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.870 pt4 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.870 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.871 
12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.871 "name": "raid_bdev1", 00:15:10.871 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:10.871 "strip_size_kb": 64, 00:15:10.871 "state": "online", 00:15:10.871 "raid_level": "raid5f", 00:15:10.871 "superblock": true, 00:15:10.871 "num_base_bdevs": 4, 00:15:10.871 "num_base_bdevs_discovered": 3, 00:15:10.871 "num_base_bdevs_operational": 3, 00:15:10.871 "base_bdevs_list": [ 00:15:10.871 { 00:15:10.871 "name": null, 00:15:10.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.871 "is_configured": false, 00:15:10.871 "data_offset": 2048, 00:15:10.871 "data_size": 63488 00:15:10.871 }, 00:15:10.871 { 00:15:10.871 "name": "pt2", 00:15:10.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.871 "is_configured": true, 00:15:10.871 "data_offset": 2048, 00:15:10.871 "data_size": 63488 00:15:10.871 }, 00:15:10.871 { 00:15:10.871 "name": "pt3", 00:15:10.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.871 "is_configured": true, 00:15:10.871 "data_offset": 2048, 00:15:10.871 "data_size": 63488 00:15:10.871 }, 00:15:10.871 { 00:15:10.871 "name": "pt4", 00:15:10.871 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.871 "is_configured": true, 00:15:10.871 "data_offset": 2048, 00:15:10.871 "data_size": 63488 00:15:10.871 } 00:15:10.871 ] 00:15:10.871 }' 00:15:10.871 12:35:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.871 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.440 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.440 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.440 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.440 [2024-11-19 12:35:16.529831] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.440 [2024-11-19 12:35:16.529972] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.440 [2024-11-19 12:35:16.530086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.440 [2024-11-19 12:35:16.530181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.441 [2024-11-19 12:35:16.530232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.441 [2024-11-19 12:35:16.589725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.441 [2024-11-19 12:35:16.589905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.441 [2024-11-19 12:35:16.589947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:11.441 [2024-11-19 12:35:16.589976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.441 [2024-11-19 12:35:16.592300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.441 [2024-11-19 12:35:16.592390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.441 [2024-11-19 12:35:16.592504] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:11.441 [2024-11-19 12:35:16.592572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.441 
[2024-11-19 12:35:16.592724] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:11.441 [2024-11-19 12:35:16.592804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.441 [2024-11-19 12:35:16.592852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:11.441 [2024-11-19 12:35:16.592938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.441 [2024-11-19 12:35:16.593083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:11.441 pt1 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.441 "name": "raid_bdev1", 00:15:11.441 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:11.441 "strip_size_kb": 64, 00:15:11.441 "state": "configuring", 00:15:11.441 "raid_level": "raid5f", 00:15:11.441 "superblock": true, 00:15:11.441 "num_base_bdevs": 4, 00:15:11.441 "num_base_bdevs_discovered": 2, 00:15:11.441 "num_base_bdevs_operational": 3, 00:15:11.441 "base_bdevs_list": [ 00:15:11.441 { 00:15:11.441 "name": null, 00:15:11.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.441 "is_configured": false, 00:15:11.441 "data_offset": 2048, 00:15:11.441 "data_size": 63488 00:15:11.441 }, 00:15:11.441 { 00:15:11.441 "name": "pt2", 00:15:11.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.441 "is_configured": true, 00:15:11.441 "data_offset": 2048, 00:15:11.441 "data_size": 63488 00:15:11.441 }, 00:15:11.441 { 00:15:11.441 "name": "pt3", 00:15:11.441 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.441 "is_configured": true, 00:15:11.441 "data_offset": 2048, 00:15:11.441 "data_size": 63488 00:15:11.441 }, 00:15:11.441 { 00:15:11.441 "name": null, 00:15:11.441 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:11.441 "is_configured": false, 00:15:11.441 "data_offset": 2048, 00:15:11.441 "data_size": 63488 00:15:11.441 } 00:15:11.441 ] 
00:15:11.441 }' 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.441 12:35:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.010 [2024-11-19 12:35:17.116860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:12.010 [2024-11-19 12:35:17.116964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.010 [2024-11-19 12:35:17.116988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:12.010 [2024-11-19 12:35:17.117001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.010 [2024-11-19 12:35:17.117476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.010 [2024-11-19 12:35:17.117500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:12.010 [2024-11-19 12:35:17.117585] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:12.010 [2024-11-19 12:35:17.117614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:12.010 [2024-11-19 12:35:17.117729] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:12.010 [2024-11-19 12:35:17.117742] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:12.010 [2024-11-19 12:35:17.118004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:12.010 [2024-11-19 12:35:17.118544] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:12.010 [2024-11-19 12:35:17.118556] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:12.010 [2024-11-19 12:35:17.118772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.010 pt4 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.010 12:35:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.010 "name": "raid_bdev1", 00:15:12.010 "uuid": "2c58c0ce-4213-40af-8b05-1d408dd19c13", 00:15:12.010 "strip_size_kb": 64, 00:15:12.010 "state": "online", 00:15:12.010 "raid_level": "raid5f", 00:15:12.010 "superblock": true, 00:15:12.010 "num_base_bdevs": 4, 00:15:12.010 "num_base_bdevs_discovered": 3, 00:15:12.010 "num_base_bdevs_operational": 3, 00:15:12.010 "base_bdevs_list": [ 00:15:12.010 { 00:15:12.010 "name": null, 00:15:12.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.010 "is_configured": false, 00:15:12.010 "data_offset": 2048, 00:15:12.010 "data_size": 63488 00:15:12.010 }, 00:15:12.010 { 00:15:12.010 "name": "pt2", 00:15:12.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.010 "is_configured": true, 00:15:12.010 "data_offset": 2048, 00:15:12.010 "data_size": 63488 00:15:12.010 }, 00:15:12.010 { 00:15:12.010 "name": "pt3", 00:15:12.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.010 "is_configured": true, 00:15:12.010 "data_offset": 2048, 00:15:12.010 "data_size": 63488 
00:15:12.010 }, 00:15:12.010 { 00:15:12.010 "name": "pt4", 00:15:12.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:12.010 "is_configured": true, 00:15:12.010 "data_offset": 2048, 00:15:12.010 "data_size": 63488 00:15:12.010 } 00:15:12.010 ] 00:15:12.010 }' 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.010 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.579 [2024-11-19 12:35:17.648171] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2c58c0ce-4213-40af-8b05-1d408dd19c13 '!=' 2c58c0ce-4213-40af-8b05-1d408dd19c13 ']' 00:15:12.579 12:35:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94773 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94773 ']' 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94773 00:15:12.579 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:12.580 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.580 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94773 00:15:12.580 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.580 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.580 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94773' 00:15:12.580 killing process with pid 94773 00:15:12.580 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94773 00:15:12.580 [2024-11-19 12:35:17.716377] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.580 [2024-11-19 12:35:17.716484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.580 12:35:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94773 00:15:12.580 [2024-11-19 12:35:17.716565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.580 [2024-11-19 12:35:17.716578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:12.580 [2024-11-19 12:35:17.760794] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.839 12:35:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:12.839 
00:15:12.839 real 0m7.296s 00:15:12.839 user 0m12.182s 00:15:12.839 sys 0m1.660s 00:15:12.839 ************************************ 00:15:12.839 END TEST raid5f_superblock_test 00:15:12.839 ************************************ 00:15:12.839 12:35:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.839 12:35:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.839 12:35:18 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:12.839 12:35:18 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:12.839 12:35:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:12.839 12:35:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.839 12:35:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.839 ************************************ 00:15:12.839 START TEST raid5f_rebuild_test 00:15:12.839 ************************************ 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:12.839 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:13.099 12:35:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95247 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95247 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95247 ']' 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.099 12:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.099 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:13.099 Zero copy mechanism will not be used. 00:15:13.099 [2024-11-19 12:35:18.196840] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:13.099 [2024-11-19 12:35:18.197013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95247 ] 00:15:13.358 [2024-11-19 12:35:18.365939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.358 [2024-11-19 12:35:18.418812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.358 [2024-11-19 12:35:18.460386] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.358 [2024-11-19 12:35:18.460528] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 BaseBdev1_malloc 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 [2024-11-19 12:35:19.078955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:13.927 [2024-11-19 12:35:19.079124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.927 [2024-11-19 12:35:19.079169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:13.927 [2024-11-19 12:35:19.079213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.927 [2024-11-19 12:35:19.081447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.927 [2024-11-19 12:35:19.081533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.927 BaseBdev1 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 BaseBdev2_malloc 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 [2024-11-19 12:35:19.118695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:13.927 [2024-11-19 12:35:19.118907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.927 [2024-11-19 12:35:19.118946] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:13.927 [2024-11-19 12:35:19.118957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.927 [2024-11-19 12:35:19.121261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.927 [2024-11-19 12:35:19.121298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:13.927 BaseBdev2 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 BaseBdev3_malloc 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 [2024-11-19 12:35:19.147887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:13.927 [2024-11-19 12:35:19.147973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.927 [2024-11-19 12:35:19.148001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:13.927 [2024-11-19 12:35:19.148010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.927 
[2024-11-19 12:35:19.150256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.927 [2024-11-19 12:35:19.150337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:13.927 BaseBdev3 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 BaseBdev4_malloc 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.927 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.927 [2024-11-19 12:35:19.176659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:13.927 [2024-11-19 12:35:19.176739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.928 [2024-11-19 12:35:19.176784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:13.928 [2024-11-19 12:35:19.176793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.928 [2024-11-19 12:35:19.178904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.928 [2024-11-19 12:35:19.178943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:15:13.928 BaseBdev4 00:15:13.928 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.928 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:13.928 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.928 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.186 spare_malloc 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.186 spare_delay 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.186 [2024-11-19 12:35:19.217520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.186 [2024-11-19 12:35:19.217605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.186 [2024-11-19 12:35:19.217650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:14.186 [2024-11-19 12:35:19.217660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.186 [2024-11-19 12:35:19.219909] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.186 [2024-11-19 12:35:19.220035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.186 spare 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.186 [2024-11-19 12:35:19.229616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.186 [2024-11-19 12:35:19.231506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.186 [2024-11-19 12:35:19.231676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.186 [2024-11-19 12:35:19.231720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:14.186 [2024-11-19 12:35:19.231838] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:14.186 [2024-11-19 12:35:19.231848] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:14.186 [2024-11-19 12:35:19.232141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:14.186 [2024-11-19 12:35:19.232581] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:14.186 [2024-11-19 12:35:19.232595] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:14.186 [2024-11-19 12:35:19.232762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.186 12:35:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.186 "name": "raid_bdev1", 00:15:14.186 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:14.186 "strip_size_kb": 64, 00:15:14.186 "state": "online", 00:15:14.186 
"raid_level": "raid5f", 00:15:14.186 "superblock": false, 00:15:14.186 "num_base_bdevs": 4, 00:15:14.186 "num_base_bdevs_discovered": 4, 00:15:14.186 "num_base_bdevs_operational": 4, 00:15:14.186 "base_bdevs_list": [ 00:15:14.186 { 00:15:14.186 "name": "BaseBdev1", 00:15:14.186 "uuid": "c573dc1a-5cd4-598b-b7b9-b9812a0cfdc5", 00:15:14.186 "is_configured": true, 00:15:14.186 "data_offset": 0, 00:15:14.186 "data_size": 65536 00:15:14.186 }, 00:15:14.186 { 00:15:14.186 "name": "BaseBdev2", 00:15:14.186 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:14.186 "is_configured": true, 00:15:14.186 "data_offset": 0, 00:15:14.186 "data_size": 65536 00:15:14.186 }, 00:15:14.186 { 00:15:14.186 "name": "BaseBdev3", 00:15:14.186 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:14.186 "is_configured": true, 00:15:14.186 "data_offset": 0, 00:15:14.186 "data_size": 65536 00:15:14.186 }, 00:15:14.186 { 00:15:14.186 "name": "BaseBdev4", 00:15:14.186 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:14.186 "is_configured": true, 00:15:14.186 "data_offset": 0, 00:15:14.186 "data_size": 65536 00:15:14.186 } 00:15:14.186 ] 00:15:14.186 }' 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.186 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.445 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:14.445 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:14.446 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.446 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.446 [2024-11-19 12:35:19.701972] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.704 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:14.704 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:14.704 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.704 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.704 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:15:14.705 12:35:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:14.964 [2024-11-19 12:35:19.985324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:14.964 /dev/nbd0 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.964 1+0 records in 00:15:14.964 1+0 records out 00:15:14.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648783 s, 6.3 MB/s 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:14.964 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:15.532 512+0 records in 00:15:15.532 512+0 records out 00:15:15.532 100663296 bytes (101 MB, 96 MiB) copied, 0.448063 s, 225 MB/s 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:15.532 
[2024-11-19 12:35:20.735559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.532 [2024-11-19 12:35:20.751641] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.532 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.533 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.792 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.792 "name": "raid_bdev1", 00:15:15.792 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:15.792 "strip_size_kb": 64, 00:15:15.792 "state": "online", 00:15:15.792 "raid_level": "raid5f", 00:15:15.792 "superblock": false, 00:15:15.792 "num_base_bdevs": 4, 00:15:15.792 "num_base_bdevs_discovered": 3, 00:15:15.792 "num_base_bdevs_operational": 3, 00:15:15.792 "base_bdevs_list": [ 00:15:15.792 { 00:15:15.792 "name": null, 00:15:15.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.792 "is_configured": false, 00:15:15.792 "data_offset": 0, 00:15:15.792 "data_size": 65536 00:15:15.792 }, 00:15:15.792 { 00:15:15.792 "name": "BaseBdev2", 00:15:15.792 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:15.792 "is_configured": true, 00:15:15.792 "data_offset": 0, 00:15:15.792 "data_size": 65536 00:15:15.792 }, 00:15:15.792 { 00:15:15.792 "name": "BaseBdev3", 00:15:15.792 "uuid": 
"66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:15.792 "is_configured": true, 00:15:15.792 "data_offset": 0, 00:15:15.792 "data_size": 65536 00:15:15.792 }, 00:15:15.792 { 00:15:15.792 "name": "BaseBdev4", 00:15:15.792 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:15.792 "is_configured": true, 00:15:15.792 "data_offset": 0, 00:15:15.792 "data_size": 65536 00:15:15.792 } 00:15:15.792 ] 00:15:15.792 }' 00:15:15.792 12:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.792 12:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.051 12:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:16.051 12:35:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.051 12:35:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.051 [2024-11-19 12:35:21.246985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.051 [2024-11-19 12:35:21.250487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:15:16.051 [2024-11-19 12:35:21.252797] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:16.051 12:35:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.051 12:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.429 12:35:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.429 "name": "raid_bdev1", 00:15:17.429 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:17.429 "strip_size_kb": 64, 00:15:17.429 "state": "online", 00:15:17.429 "raid_level": "raid5f", 00:15:17.429 "superblock": false, 00:15:17.429 "num_base_bdevs": 4, 00:15:17.429 "num_base_bdevs_discovered": 4, 00:15:17.429 "num_base_bdevs_operational": 4, 00:15:17.429 "process": { 00:15:17.429 "type": "rebuild", 00:15:17.429 "target": "spare", 00:15:17.429 "progress": { 00:15:17.429 "blocks": 19200, 00:15:17.429 "percent": 9 00:15:17.429 } 00:15:17.429 }, 00:15:17.429 "base_bdevs_list": [ 00:15:17.429 { 00:15:17.429 "name": "spare", 00:15:17.429 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:17.429 "is_configured": true, 00:15:17.429 "data_offset": 0, 00:15:17.429 "data_size": 65536 00:15:17.429 }, 00:15:17.429 { 00:15:17.429 "name": "BaseBdev2", 00:15:17.429 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:17.429 "is_configured": true, 00:15:17.429 "data_offset": 0, 00:15:17.429 "data_size": 65536 00:15:17.429 }, 00:15:17.429 { 00:15:17.429 "name": "BaseBdev3", 00:15:17.429 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:17.429 "is_configured": true, 00:15:17.429 "data_offset": 0, 00:15:17.429 "data_size": 65536 00:15:17.429 }, 
00:15:17.429 { 00:15:17.429 "name": "BaseBdev4", 00:15:17.429 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:17.429 "is_configured": true, 00:15:17.429 "data_offset": 0, 00:15:17.429 "data_size": 65536 00:15:17.429 } 00:15:17.429 ] 00:15:17.429 }' 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.429 [2024-11-19 12:35:22.419737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.429 [2024-11-19 12:35:22.461270] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.429 [2024-11-19 12:35:22.461356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.429 [2024-11-19 12:35:22.461379] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.429 [2024-11-19 12:35:22.461387] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.429 "name": "raid_bdev1", 00:15:17.429 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:17.429 "strip_size_kb": 64, 00:15:17.429 "state": "online", 00:15:17.429 "raid_level": "raid5f", 00:15:17.429 "superblock": false, 00:15:17.429 "num_base_bdevs": 4, 00:15:17.429 "num_base_bdevs_discovered": 3, 00:15:17.429 "num_base_bdevs_operational": 3, 00:15:17.429 "base_bdevs_list": [ 00:15:17.429 { 00:15:17.429 "name": null, 00:15:17.429 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:17.429 "is_configured": false, 00:15:17.429 "data_offset": 0, 00:15:17.429 "data_size": 65536 00:15:17.429 }, 00:15:17.429 { 00:15:17.429 "name": "BaseBdev2", 00:15:17.429 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:17.429 "is_configured": true, 00:15:17.429 "data_offset": 0, 00:15:17.429 "data_size": 65536 00:15:17.429 }, 00:15:17.429 { 00:15:17.429 "name": "BaseBdev3", 00:15:17.429 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:17.429 "is_configured": true, 00:15:17.429 "data_offset": 0, 00:15:17.429 "data_size": 65536 00:15:17.429 }, 00:15:17.429 { 00:15:17.429 "name": "BaseBdev4", 00:15:17.429 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:17.429 "is_configured": true, 00:15:17.429 "data_offset": 0, 00:15:17.429 "data_size": 65536 00:15:17.429 } 00:15:17.429 ] 00:15:17.429 }' 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.429 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.689 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.689 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.689 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.689 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.689 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.689 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.689 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.689 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.689 12:35:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.948 12:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.948 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.948 "name": "raid_bdev1", 00:15:17.948 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:17.948 "strip_size_kb": 64, 00:15:17.948 "state": "online", 00:15:17.948 "raid_level": "raid5f", 00:15:17.948 "superblock": false, 00:15:17.948 "num_base_bdevs": 4, 00:15:17.948 "num_base_bdevs_discovered": 3, 00:15:17.948 "num_base_bdevs_operational": 3, 00:15:17.948 "base_bdevs_list": [ 00:15:17.948 { 00:15:17.948 "name": null, 00:15:17.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.948 "is_configured": false, 00:15:17.948 "data_offset": 0, 00:15:17.948 "data_size": 65536 00:15:17.948 }, 00:15:17.948 { 00:15:17.948 "name": "BaseBdev2", 00:15:17.948 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:17.948 "is_configured": true, 00:15:17.948 "data_offset": 0, 00:15:17.948 "data_size": 65536 00:15:17.948 }, 00:15:17.948 { 00:15:17.948 "name": "BaseBdev3", 00:15:17.948 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:17.948 "is_configured": true, 00:15:17.948 "data_offset": 0, 00:15:17.948 "data_size": 65536 00:15:17.948 }, 00:15:17.948 { 00:15:17.948 "name": "BaseBdev4", 00:15:17.948 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:17.948 "is_configured": true, 00:15:17.948 "data_offset": 0, 00:15:17.948 "data_size": 65536 00:15:17.949 } 00:15:17.949 ] 00:15:17.949 }' 00:15:17.949 12:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.949 12:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.949 12:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.949 12:35:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.949 12:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:17.949 12:35:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.949 12:35:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.949 [2024-11-19 12:35:23.082282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.949 [2024-11-19 12:35:23.085650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:17.949 [2024-11-19 12:35:23.087943] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.949 12:35:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.949 12:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:18.891 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.891 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.891 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.891 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.891 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.891 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.891 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.891 12:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.891 12:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.891 12:35:24 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.151 "name": "raid_bdev1", 00:15:19.151 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:19.151 "strip_size_kb": 64, 00:15:19.151 "state": "online", 00:15:19.151 "raid_level": "raid5f", 00:15:19.151 "superblock": false, 00:15:19.151 "num_base_bdevs": 4, 00:15:19.151 "num_base_bdevs_discovered": 4, 00:15:19.151 "num_base_bdevs_operational": 4, 00:15:19.151 "process": { 00:15:19.151 "type": "rebuild", 00:15:19.151 "target": "spare", 00:15:19.151 "progress": { 00:15:19.151 "blocks": 19200, 00:15:19.151 "percent": 9 00:15:19.151 } 00:15:19.151 }, 00:15:19.151 "base_bdevs_list": [ 00:15:19.151 { 00:15:19.151 "name": "spare", 00:15:19.151 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:19.151 "is_configured": true, 00:15:19.151 "data_offset": 0, 00:15:19.151 "data_size": 65536 00:15:19.151 }, 00:15:19.151 { 00:15:19.151 "name": "BaseBdev2", 00:15:19.151 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:19.151 "is_configured": true, 00:15:19.151 "data_offset": 0, 00:15:19.151 "data_size": 65536 00:15:19.151 }, 00:15:19.151 { 00:15:19.151 "name": "BaseBdev3", 00:15:19.151 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:19.151 "is_configured": true, 00:15:19.151 "data_offset": 0, 00:15:19.151 "data_size": 65536 00:15:19.151 }, 00:15:19.151 { 00:15:19.151 "name": "BaseBdev4", 00:15:19.151 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:19.151 "is_configured": true, 00:15:19.151 "data_offset": 0, 00:15:19.151 "data_size": 65536 00:15:19.151 } 00:15:19.151 ] 00:15:19.151 }' 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=517 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.151 "name": "raid_bdev1", 00:15:19.151 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 
00:15:19.151 "strip_size_kb": 64, 00:15:19.151 "state": "online", 00:15:19.151 "raid_level": "raid5f", 00:15:19.151 "superblock": false, 00:15:19.151 "num_base_bdevs": 4, 00:15:19.151 "num_base_bdevs_discovered": 4, 00:15:19.151 "num_base_bdevs_operational": 4, 00:15:19.151 "process": { 00:15:19.151 "type": "rebuild", 00:15:19.151 "target": "spare", 00:15:19.151 "progress": { 00:15:19.151 "blocks": 21120, 00:15:19.151 "percent": 10 00:15:19.151 } 00:15:19.151 }, 00:15:19.151 "base_bdevs_list": [ 00:15:19.151 { 00:15:19.151 "name": "spare", 00:15:19.151 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:19.151 "is_configured": true, 00:15:19.151 "data_offset": 0, 00:15:19.151 "data_size": 65536 00:15:19.151 }, 00:15:19.151 { 00:15:19.151 "name": "BaseBdev2", 00:15:19.151 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:19.151 "is_configured": true, 00:15:19.151 "data_offset": 0, 00:15:19.151 "data_size": 65536 00:15:19.151 }, 00:15:19.151 { 00:15:19.151 "name": "BaseBdev3", 00:15:19.151 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:19.151 "is_configured": true, 00:15:19.151 "data_offset": 0, 00:15:19.151 "data_size": 65536 00:15:19.151 }, 00:15:19.151 { 00:15:19.151 "name": "BaseBdev4", 00:15:19.151 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:19.151 "is_configured": true, 00:15:19.151 "data_offset": 0, 00:15:19.151 "data_size": 65536 00:15:19.151 } 00:15:19.151 ] 00:15:19.151 }' 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.151 12:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.528 12:35:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.528 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.528 "name": "raid_bdev1", 00:15:20.528 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:20.528 "strip_size_kb": 64, 00:15:20.528 "state": "online", 00:15:20.528 "raid_level": "raid5f", 00:15:20.528 "superblock": false, 00:15:20.528 "num_base_bdevs": 4, 00:15:20.528 "num_base_bdevs_discovered": 4, 00:15:20.528 "num_base_bdevs_operational": 4, 00:15:20.528 "process": { 00:15:20.528 "type": "rebuild", 00:15:20.528 "target": "spare", 00:15:20.528 "progress": { 00:15:20.528 "blocks": 44160, 00:15:20.528 "percent": 22 00:15:20.528 } 00:15:20.528 }, 00:15:20.528 "base_bdevs_list": [ 00:15:20.528 { 00:15:20.528 "name": "spare", 00:15:20.528 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 
00:15:20.528 "is_configured": true, 00:15:20.528 "data_offset": 0, 00:15:20.529 "data_size": 65536 00:15:20.529 }, 00:15:20.529 { 00:15:20.529 "name": "BaseBdev2", 00:15:20.529 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:20.529 "is_configured": true, 00:15:20.529 "data_offset": 0, 00:15:20.529 "data_size": 65536 00:15:20.529 }, 00:15:20.529 { 00:15:20.529 "name": "BaseBdev3", 00:15:20.529 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:20.529 "is_configured": true, 00:15:20.529 "data_offset": 0, 00:15:20.529 "data_size": 65536 00:15:20.529 }, 00:15:20.529 { 00:15:20.529 "name": "BaseBdev4", 00:15:20.529 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:20.529 "is_configured": true, 00:15:20.529 "data_offset": 0, 00:15:20.529 "data_size": 65536 00:15:20.529 } 00:15:20.529 ] 00:15:20.529 }' 00:15:20.529 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.529 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.529 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.529 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.529 12:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.466 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.466 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.467 "name": "raid_bdev1", 00:15:21.467 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:21.467 "strip_size_kb": 64, 00:15:21.467 "state": "online", 00:15:21.467 "raid_level": "raid5f", 00:15:21.467 "superblock": false, 00:15:21.467 "num_base_bdevs": 4, 00:15:21.467 "num_base_bdevs_discovered": 4, 00:15:21.467 "num_base_bdevs_operational": 4, 00:15:21.467 "process": { 00:15:21.467 "type": "rebuild", 00:15:21.467 "target": "spare", 00:15:21.467 "progress": { 00:15:21.467 "blocks": 65280, 00:15:21.467 "percent": 33 00:15:21.467 } 00:15:21.467 }, 00:15:21.467 "base_bdevs_list": [ 00:15:21.467 { 00:15:21.467 "name": "spare", 00:15:21.467 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:21.467 "is_configured": true, 00:15:21.467 "data_offset": 0, 00:15:21.467 "data_size": 65536 00:15:21.467 }, 00:15:21.467 { 00:15:21.467 "name": "BaseBdev2", 00:15:21.467 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:21.467 "is_configured": true, 00:15:21.467 "data_offset": 0, 00:15:21.467 "data_size": 65536 00:15:21.467 }, 00:15:21.467 { 00:15:21.467 "name": "BaseBdev3", 00:15:21.467 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:21.467 "is_configured": true, 00:15:21.467 "data_offset": 0, 00:15:21.467 "data_size": 65536 00:15:21.467 }, 00:15:21.467 { 00:15:21.467 "name": 
"BaseBdev4", 00:15:21.467 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:21.467 "is_configured": true, 00:15:21.467 "data_offset": 0, 00:15:21.467 "data_size": 65536 00:15:21.467 } 00:15:21.467 ] 00:15:21.467 }' 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.467 12:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.845 12:35:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.845 "name": "raid_bdev1", 00:15:22.845 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:22.845 "strip_size_kb": 64, 00:15:22.845 "state": "online", 00:15:22.845 "raid_level": "raid5f", 00:15:22.845 "superblock": false, 00:15:22.845 "num_base_bdevs": 4, 00:15:22.845 "num_base_bdevs_discovered": 4, 00:15:22.845 "num_base_bdevs_operational": 4, 00:15:22.845 "process": { 00:15:22.845 "type": "rebuild", 00:15:22.845 "target": "spare", 00:15:22.845 "progress": { 00:15:22.845 "blocks": 86400, 00:15:22.845 "percent": 43 00:15:22.845 } 00:15:22.845 }, 00:15:22.845 "base_bdevs_list": [ 00:15:22.845 { 00:15:22.845 "name": "spare", 00:15:22.845 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:22.845 "is_configured": true, 00:15:22.845 "data_offset": 0, 00:15:22.845 "data_size": 65536 00:15:22.845 }, 00:15:22.845 { 00:15:22.845 "name": "BaseBdev2", 00:15:22.845 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:22.845 "is_configured": true, 00:15:22.845 "data_offset": 0, 00:15:22.845 "data_size": 65536 00:15:22.845 }, 00:15:22.845 { 00:15:22.845 "name": "BaseBdev3", 00:15:22.845 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:22.845 "is_configured": true, 00:15:22.845 "data_offset": 0, 00:15:22.845 "data_size": 65536 00:15:22.845 }, 00:15:22.845 { 00:15:22.845 "name": "BaseBdev4", 00:15:22.845 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:22.845 "is_configured": true, 00:15:22.845 "data_offset": 0, 00:15:22.845 "data_size": 65536 00:15:22.845 } 00:15:22.845 ] 00:15:22.845 }' 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.845 12:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:23.782 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.782 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.782 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.782 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.782 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.782 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.782 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.782 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.782 12:35:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.783 12:35:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.783 12:35:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.783 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.783 "name": "raid_bdev1", 00:15:23.783 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:23.783 "strip_size_kb": 64, 00:15:23.783 "state": "online", 00:15:23.783 "raid_level": "raid5f", 00:15:23.783 "superblock": false, 00:15:23.783 "num_base_bdevs": 4, 00:15:23.783 "num_base_bdevs_discovered": 4, 00:15:23.783 "num_base_bdevs_operational": 4, 00:15:23.783 "process": { 00:15:23.783 "type": "rebuild", 00:15:23.783 "target": "spare", 00:15:23.783 "progress": { 00:15:23.783 "blocks": 109440, 00:15:23.783 "percent": 55 00:15:23.783 } 
00:15:23.783 }, 00:15:23.783 "base_bdevs_list": [ 00:15:23.783 { 00:15:23.783 "name": "spare", 00:15:23.783 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:23.783 "is_configured": true, 00:15:23.783 "data_offset": 0, 00:15:23.783 "data_size": 65536 00:15:23.783 }, 00:15:23.783 { 00:15:23.783 "name": "BaseBdev2", 00:15:23.783 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:23.783 "is_configured": true, 00:15:23.783 "data_offset": 0, 00:15:23.783 "data_size": 65536 00:15:23.783 }, 00:15:23.783 { 00:15:23.783 "name": "BaseBdev3", 00:15:23.783 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:23.783 "is_configured": true, 00:15:23.783 "data_offset": 0, 00:15:23.783 "data_size": 65536 00:15:23.783 }, 00:15:23.783 { 00:15:23.783 "name": "BaseBdev4", 00:15:23.783 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:23.783 "is_configured": true, 00:15:23.783 "data_offset": 0, 00:15:23.783 "data_size": 65536 00:15:23.783 } 00:15:23.783 ] 00:15:23.783 }' 00:15:23.783 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.783 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.783 12:35:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.783 12:35:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.783 12:35:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.161 
12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.161 "name": "raid_bdev1", 00:15:25.161 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:25.161 "strip_size_kb": 64, 00:15:25.161 "state": "online", 00:15:25.161 "raid_level": "raid5f", 00:15:25.161 "superblock": false, 00:15:25.161 "num_base_bdevs": 4, 00:15:25.161 "num_base_bdevs_discovered": 4, 00:15:25.161 "num_base_bdevs_operational": 4, 00:15:25.161 "process": { 00:15:25.161 "type": "rebuild", 00:15:25.161 "target": "spare", 00:15:25.161 "progress": { 00:15:25.161 "blocks": 130560, 00:15:25.161 "percent": 66 00:15:25.161 } 00:15:25.161 }, 00:15:25.161 "base_bdevs_list": [ 00:15:25.161 { 00:15:25.161 "name": "spare", 00:15:25.161 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:25.161 "is_configured": true, 00:15:25.161 "data_offset": 0, 00:15:25.161 "data_size": 65536 00:15:25.161 }, 00:15:25.161 { 00:15:25.161 "name": "BaseBdev2", 00:15:25.161 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:25.161 "is_configured": true, 00:15:25.161 "data_offset": 0, 00:15:25.161 "data_size": 65536 00:15:25.161 }, 00:15:25.161 { 00:15:25.161 "name": "BaseBdev3", 00:15:25.161 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 
00:15:25.161 "is_configured": true, 00:15:25.161 "data_offset": 0, 00:15:25.161 "data_size": 65536 00:15:25.161 }, 00:15:25.161 { 00:15:25.161 "name": "BaseBdev4", 00:15:25.161 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:25.161 "is_configured": true, 00:15:25.161 "data_offset": 0, 00:15:25.161 "data_size": 65536 00:15:25.161 } 00:15:25.161 ] 00:15:25.161 }' 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.161 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.162 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.162 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.162 12:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.099 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.099 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.099 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.099 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.100 "name": "raid_bdev1", 00:15:26.100 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:26.100 "strip_size_kb": 64, 00:15:26.100 "state": "online", 00:15:26.100 "raid_level": "raid5f", 00:15:26.100 "superblock": false, 00:15:26.100 "num_base_bdevs": 4, 00:15:26.100 "num_base_bdevs_discovered": 4, 00:15:26.100 "num_base_bdevs_operational": 4, 00:15:26.100 "process": { 00:15:26.100 "type": "rebuild", 00:15:26.100 "target": "spare", 00:15:26.100 "progress": { 00:15:26.100 "blocks": 153600, 00:15:26.100 "percent": 78 00:15:26.100 } 00:15:26.100 }, 00:15:26.100 "base_bdevs_list": [ 00:15:26.100 { 00:15:26.100 "name": "spare", 00:15:26.100 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:26.100 "is_configured": true, 00:15:26.100 "data_offset": 0, 00:15:26.100 "data_size": 65536 00:15:26.100 }, 00:15:26.100 { 00:15:26.100 "name": "BaseBdev2", 00:15:26.100 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:26.100 "is_configured": true, 00:15:26.100 "data_offset": 0, 00:15:26.100 "data_size": 65536 00:15:26.100 }, 00:15:26.100 { 00:15:26.100 "name": "BaseBdev3", 00:15:26.100 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:26.100 "is_configured": true, 00:15:26.100 "data_offset": 0, 00:15:26.100 "data_size": 65536 00:15:26.100 }, 00:15:26.100 { 00:15:26.100 "name": "BaseBdev4", 00:15:26.100 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:26.100 "is_configured": true, 00:15:26.100 "data_offset": 0, 00:15:26.100 "data_size": 65536 00:15:26.100 } 00:15:26.100 ] 00:15:26.100 }' 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.100 12:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.479 "name": "raid_bdev1", 00:15:27.479 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:27.479 "strip_size_kb": 64, 00:15:27.479 "state": "online", 00:15:27.479 "raid_level": "raid5f", 00:15:27.479 "superblock": false, 00:15:27.479 "num_base_bdevs": 4, 00:15:27.479 "num_base_bdevs_discovered": 4, 00:15:27.479 "num_base_bdevs_operational": 4, 00:15:27.479 
"process": { 00:15:27.479 "type": "rebuild", 00:15:27.479 "target": "spare", 00:15:27.479 "progress": { 00:15:27.479 "blocks": 174720, 00:15:27.479 "percent": 88 00:15:27.479 } 00:15:27.479 }, 00:15:27.479 "base_bdevs_list": [ 00:15:27.479 { 00:15:27.479 "name": "spare", 00:15:27.479 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:27.479 "is_configured": true, 00:15:27.479 "data_offset": 0, 00:15:27.479 "data_size": 65536 00:15:27.479 }, 00:15:27.479 { 00:15:27.479 "name": "BaseBdev2", 00:15:27.479 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:27.479 "is_configured": true, 00:15:27.479 "data_offset": 0, 00:15:27.479 "data_size": 65536 00:15:27.479 }, 00:15:27.479 { 00:15:27.479 "name": "BaseBdev3", 00:15:27.479 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:27.479 "is_configured": true, 00:15:27.479 "data_offset": 0, 00:15:27.479 "data_size": 65536 00:15:27.479 }, 00:15:27.479 { 00:15:27.479 "name": "BaseBdev4", 00:15:27.479 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:27.479 "is_configured": true, 00:15:27.479 "data_offset": 0, 00:15:27.479 "data_size": 65536 00:15:27.479 } 00:15:27.479 ] 00:15:27.479 }' 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.479 12:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.417 [2024-11-19 12:35:33.457113] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:28.417 [2024-11-19 12:35:33.457231] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:28.417 [2024-11-19 
12:35:33.457281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.417 "name": "raid_bdev1", 00:15:28.417 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:28.417 "strip_size_kb": 64, 00:15:28.417 "state": "online", 00:15:28.417 "raid_level": "raid5f", 00:15:28.417 "superblock": false, 00:15:28.417 "num_base_bdevs": 4, 00:15:28.417 "num_base_bdevs_discovered": 4, 00:15:28.417 "num_base_bdevs_operational": 4, 00:15:28.417 "base_bdevs_list": [ 00:15:28.417 { 00:15:28.417 "name": "spare", 00:15:28.417 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:28.417 "is_configured": true, 00:15:28.417 "data_offset": 0, 00:15:28.417 "data_size": 65536 
00:15:28.417 }, 00:15:28.417 { 00:15:28.417 "name": "BaseBdev2", 00:15:28.417 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:28.417 "is_configured": true, 00:15:28.417 "data_offset": 0, 00:15:28.417 "data_size": 65536 00:15:28.417 }, 00:15:28.417 { 00:15:28.417 "name": "BaseBdev3", 00:15:28.417 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:28.417 "is_configured": true, 00:15:28.417 "data_offset": 0, 00:15:28.417 "data_size": 65536 00:15:28.417 }, 00:15:28.417 { 00:15:28.417 "name": "BaseBdev4", 00:15:28.417 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:28.417 "is_configured": true, 00:15:28.417 "data_offset": 0, 00:15:28.417 "data_size": 65536 00:15:28.417 } 00:15:28.417 ] 00:15:28.417 }' 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.417 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.677 "name": "raid_bdev1", 00:15:28.677 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:28.677 "strip_size_kb": 64, 00:15:28.677 "state": "online", 00:15:28.677 "raid_level": "raid5f", 00:15:28.677 "superblock": false, 00:15:28.677 "num_base_bdevs": 4, 00:15:28.677 "num_base_bdevs_discovered": 4, 00:15:28.677 "num_base_bdevs_operational": 4, 00:15:28.677 "base_bdevs_list": [ 00:15:28.677 { 00:15:28.677 "name": "spare", 00:15:28.677 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 }, 00:15:28.677 { 00:15:28.677 "name": "BaseBdev2", 00:15:28.677 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 }, 00:15:28.677 { 00:15:28.677 "name": "BaseBdev3", 00:15:28.677 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 }, 00:15:28.677 { 00:15:28.677 "name": "BaseBdev4", 00:15:28.677 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 } 00:15:28.677 ] 00:15:28.677 }' 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.677 "name": "raid_bdev1", 
00:15:28.677 "uuid": "ddc780c5-9cc4-4b57-81d2-4e05a22ea9b7", 00:15:28.677 "strip_size_kb": 64, 00:15:28.677 "state": "online", 00:15:28.677 "raid_level": "raid5f", 00:15:28.677 "superblock": false, 00:15:28.677 "num_base_bdevs": 4, 00:15:28.677 "num_base_bdevs_discovered": 4, 00:15:28.677 "num_base_bdevs_operational": 4, 00:15:28.677 "base_bdevs_list": [ 00:15:28.677 { 00:15:28.677 "name": "spare", 00:15:28.677 "uuid": "3d5c77e7-222e-5d91-8984-db3ce52aaebc", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 }, 00:15:28.677 { 00:15:28.677 "name": "BaseBdev2", 00:15:28.677 "uuid": "26202c78-e768-550c-bd06-237746990f5a", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 }, 00:15:28.677 { 00:15:28.677 "name": "BaseBdev3", 00:15:28.677 "uuid": "66d20772-92d8-5336-90ad-ebd8bce22978", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 }, 00:15:28.677 { 00:15:28.677 "name": "BaseBdev4", 00:15:28.677 "uuid": "74e7cca5-1850-5372-9b3f-dcf6a4d41af6", 00:15:28.677 "is_configured": true, 00:15:28.677 "data_offset": 0, 00:15:28.677 "data_size": 65536 00:15:28.677 } 00:15:28.677 ] 00:15:28.677 }' 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.677 12:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.937 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:28.937 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.937 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.937 [2024-11-19 12:35:34.169411] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.937 [2024-11-19 12:35:34.169457] bdev_raid.c:1895:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:15:28.937 [2024-11-19 12:35:34.169548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.937 [2024-11-19 12:35:34.169640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.937 [2024-11-19 12:35:34.169653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:28.937 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.937 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.937 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:28.937 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.937 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.937 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.197 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:29.197 /dev/nbd0 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.457 1+0 records in 00:15:29.457 1+0 records out 00:15:29.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613038 s, 6.7 MB/s 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.457 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:29.717 /dev/nbd1 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:29.717 12:35:34 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.717 1+0 records in 00:15:29.717 1+0 records out 00:15:29.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621672 s, 6.6 MB/s 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.717 12:35:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.974 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95247 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95247 ']' 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95247 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95247 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95247' 00:15:30.233 killing process with pid 95247 00:15:30.233 Received shutdown signal, test time was about 60.000000 seconds 00:15:30.233 00:15:30.233 Latency(us) 00:15:30.233 [2024-11-19T12:35:35.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.233 [2024-11-19T12:35:35.494Z] =================================================================================================================== 00:15:30.233 [2024-11-19T12:35:35.494Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95247 00:15:30.233 [2024-11-19 12:35:35.364086] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.233 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95247 00:15:30.233 [2024-11-19 12:35:35.415779] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:15:30.493 00:15:30.493 real 0m17.560s 00:15:30.493 user 0m21.383s 00:15:30.493 sys 0m2.457s 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.493 ************************************ 00:15:30.493 END TEST raid5f_rebuild_test 00:15:30.493 ************************************ 00:15:30.493 12:35:35 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:30.493 12:35:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:30.493 12:35:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.493 12:35:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.493 ************************************ 00:15:30.493 START TEST raid5f_rebuild_test_sb 00:15:30.493 ************************************ 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f 
'!=' raid1 ']' 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95728 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95728 00:15:30.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95728 ']' 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.493 12:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.753 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:30.753 Zero copy mechanism will not be used. 
00:15:30.753 [2024-11-19 12:35:35.835622] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:30.753 [2024-11-19 12:35:35.835797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95728 ] 00:15:30.753 [2024-11-19 12:35:36.003637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.012 [2024-11-19 12:35:36.055818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.012 [2024-11-19 12:35:36.097951] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.012 [2024-11-19 12:35:36.097993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.581 BaseBdev1_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.581 [2024-11-19 12:35:36.716223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:31.581 [2024-11-19 12:35:36.716302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.581 [2024-11-19 12:35:36.716335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.581 [2024-11-19 12:35:36.716353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.581 [2024-11-19 12:35:36.718486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.581 [2024-11-19 12:35:36.718616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:31.581 BaseBdev1 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.581 BaseBdev2_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.581 [2024-11-19 12:35:36.753406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:31.581 
[2024-11-19 12:35:36.753576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.581 [2024-11-19 12:35:36.753609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:31.581 [2024-11-19 12:35:36.753621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.581 [2024-11-19 12:35:36.756220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.581 [2024-11-19 12:35:36.756256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:31.581 BaseBdev2 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.581 BaseBdev3_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.581 [2024-11-19 12:35:36.782276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:31.581 [2024-11-19 12:35:36.782350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.581 [2024-11-19 12:35:36.782377] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:31.581 [2024-11-19 12:35:36.782386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.581 [2024-11-19 12:35:36.784576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.581 [2024-11-19 12:35:36.784695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:31.581 BaseBdev3 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.581 BaseBdev4_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.581 [2024-11-19 12:35:36.811039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:31.581 [2024-11-19 12:35:36.811213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.581 [2024-11-19 12:35:36.811248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:31.581 [2024-11-19 12:35:36.811258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:15:31.581 [2024-11-19 12:35:36.813440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.581 [2024-11-19 12:35:36.813482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:31.581 BaseBdev4 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.581 spare_malloc 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.581 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.841 spare_delay 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.841 [2024-11-19 12:35:36.851689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.841 [2024-11-19 12:35:36.851780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.841 [2024-11-19 12:35:36.851807] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:31.841 [2024-11-19 12:35:36.851817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.841 [2024-11-19 12:35:36.853972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.841 [2024-11-19 12:35:36.854092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.841 spare 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.841 [2024-11-19 12:35:36.863783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.841 [2024-11-19 12:35:36.865637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.841 [2024-11-19 12:35:36.865710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.841 [2024-11-19 12:35:36.865761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:31.841 [2024-11-19 12:35:36.865950] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:31.841 [2024-11-19 12:35:36.865963] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:31.841 [2024-11-19 12:35:36.866231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:31.841 [2024-11-19 12:35:36.866682] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:31.841 
[2024-11-19 12:35:36.866696] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:31.841 [2024-11-19 12:35:36.866870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.841 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.841 "name": "raid_bdev1", 00:15:31.841 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:31.841 "strip_size_kb": 64, 00:15:31.841 "state": "online", 00:15:31.841 "raid_level": "raid5f", 00:15:31.841 "superblock": true, 00:15:31.842 "num_base_bdevs": 4, 00:15:31.842 "num_base_bdevs_discovered": 4, 00:15:31.842 "num_base_bdevs_operational": 4, 00:15:31.842 "base_bdevs_list": [ 00:15:31.842 { 00:15:31.842 "name": "BaseBdev1", 00:15:31.842 "uuid": "8ea3c5f8-5b36-56ca-aada-c28efd4e0e11", 00:15:31.842 "is_configured": true, 00:15:31.842 "data_offset": 2048, 00:15:31.842 "data_size": 63488 00:15:31.842 }, 00:15:31.842 { 00:15:31.842 "name": "BaseBdev2", 00:15:31.842 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:31.842 "is_configured": true, 00:15:31.842 "data_offset": 2048, 00:15:31.842 "data_size": 63488 00:15:31.842 }, 00:15:31.842 { 00:15:31.842 "name": "BaseBdev3", 00:15:31.842 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:31.842 "is_configured": true, 00:15:31.842 "data_offset": 2048, 00:15:31.842 "data_size": 63488 00:15:31.842 }, 00:15:31.842 { 00:15:31.842 "name": "BaseBdev4", 00:15:31.842 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:31.842 "is_configured": true, 00:15:31.842 "data_offset": 2048, 00:15:31.842 "data_size": 63488 00:15:31.842 } 00:15:31.842 ] 00:15:31.842 }' 00:15:31.842 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.842 12:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.100 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.100 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.101 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.101 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:32.101 [2024-11-19 12:35:37.336154] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.101 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.366 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:32.640 [2024-11-19 12:35:37.631482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:32.640 /dev/nbd0 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:15:32.641 1+0 records in 00:15:32.641 1+0 records out 00:15:32.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429322 s, 9.5 MB/s 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:32.641 12:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:32.915 496+0 records in 00:15:32.915 496+0 records out 00:15:32.915 97517568 bytes (98 MB, 93 MiB) copied, 0.403669 s, 242 MB/s 00:15:32.915 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:32.915 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.915 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:32.915 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:15:32.915 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:32.915 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.915 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:33.174 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.174 [2024-11-19 12:35:38.336696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.175 [2024-11-19 12:35:38.360737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.175 "name": "raid_bdev1", 00:15:33.175 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:33.175 "strip_size_kb": 64, 00:15:33.175 "state": "online", 00:15:33.175 "raid_level": "raid5f", 00:15:33.175 "superblock": true, 00:15:33.175 "num_base_bdevs": 4, 00:15:33.175 "num_base_bdevs_discovered": 3, 00:15:33.175 
"num_base_bdevs_operational": 3, 00:15:33.175 "base_bdevs_list": [ 00:15:33.175 { 00:15:33.175 "name": null, 00:15:33.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.175 "is_configured": false, 00:15:33.175 "data_offset": 0, 00:15:33.175 "data_size": 63488 00:15:33.175 }, 00:15:33.175 { 00:15:33.175 "name": "BaseBdev2", 00:15:33.175 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:33.175 "is_configured": true, 00:15:33.175 "data_offset": 2048, 00:15:33.175 "data_size": 63488 00:15:33.175 }, 00:15:33.175 { 00:15:33.175 "name": "BaseBdev3", 00:15:33.175 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:33.175 "is_configured": true, 00:15:33.175 "data_offset": 2048, 00:15:33.175 "data_size": 63488 00:15:33.175 }, 00:15:33.175 { 00:15:33.175 "name": "BaseBdev4", 00:15:33.175 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:33.175 "is_configured": true, 00:15:33.175 "data_offset": 2048, 00:15:33.175 "data_size": 63488 00:15:33.175 } 00:15:33.175 ] 00:15:33.175 }' 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.175 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.743 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.743 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.743 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.743 [2024-11-19 12:35:38.820036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.743 [2024-11-19 12:35:38.823582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:15:33.743 [2024-11-19 12:35:38.825878] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:33.743 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.743 12:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:34.680 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.681 "name": "raid_bdev1", 00:15:34.681 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:34.681 "strip_size_kb": 64, 00:15:34.681 "state": "online", 00:15:34.681 "raid_level": "raid5f", 00:15:34.681 "superblock": true, 00:15:34.681 "num_base_bdevs": 4, 00:15:34.681 "num_base_bdevs_discovered": 4, 00:15:34.681 "num_base_bdevs_operational": 4, 00:15:34.681 "process": { 00:15:34.681 "type": "rebuild", 00:15:34.681 "target": "spare", 00:15:34.681 "progress": { 00:15:34.681 "blocks": 19200, 00:15:34.681 "percent": 10 00:15:34.681 } 00:15:34.681 }, 00:15:34.681 "base_bdevs_list": [ 00:15:34.681 { 
00:15:34.681 "name": "spare", 00:15:34.681 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:34.681 "is_configured": true, 00:15:34.681 "data_offset": 2048, 00:15:34.681 "data_size": 63488 00:15:34.681 }, 00:15:34.681 { 00:15:34.681 "name": "BaseBdev2", 00:15:34.681 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:34.681 "is_configured": true, 00:15:34.681 "data_offset": 2048, 00:15:34.681 "data_size": 63488 00:15:34.681 }, 00:15:34.681 { 00:15:34.681 "name": "BaseBdev3", 00:15:34.681 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:34.681 "is_configured": true, 00:15:34.681 "data_offset": 2048, 00:15:34.681 "data_size": 63488 00:15:34.681 }, 00:15:34.681 { 00:15:34.681 "name": "BaseBdev4", 00:15:34.681 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:34.681 "is_configured": true, 00:15:34.681 "data_offset": 2048, 00:15:34.681 "data_size": 63488 00:15:34.681 } 00:15:34.681 ] 00:15:34.681 }' 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.681 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.940 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.940 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:34.940 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.940 12:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.940 [2024-11-19 12:35:39.972812] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.940 [2024-11-19 12:35:40.034519] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:34.940 
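The geometry numbers reported throughout this log can be cross-checked against each other. The sketch below is not part of the test; it only redoes the arithmetic, assuming a 512-byte block size and one parity chunk per raid5f stripe (both assumptions, not stated in the log itself):

```shell
# Cross-check of the raid5f geometry reported in the log (a sketch; the
# 512-byte block size and one-parity-chunk-per-stripe layout are assumptions).
strip_size_kb=64
num_base_bdevs=4
data_size_blocks=63488   # per base bdev, after the 2048-block superblock data_offset

# With one parity chunk per stripe, usable capacity is (n - 1) data chunks,
# which should match the raid_bdev_size of 190464 blocks read via bdev_get_bdevs.
raid_size_blocks=$(( data_size_blocks * (num_base_bdevs - 1) ))
echo "raid size: ${raid_size_blocks} blocks"

# A full-stripe write is (n - 1) data chunks of strip_size, expressed in blocks;
# this should match write_unit_size=384 and the dd block size of 196608 bytes.
write_unit_blocks=$(( strip_size_kb * 1024 * (num_base_bdevs - 1) / 512 ))
echo "write unit: ${write_unit_blocks} blocks"
```

Both values line up with the log: 63488 × 3 = 190464 blocks of raid capacity, and 64 KiB × 3 = 196608 bytes = 384 blocks per full-stripe write, which is why the test writes with `dd bs=196608`.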
[2024-11-19 12:35:40.034729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.940 [2024-11-19 12:35:40.034768] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.940 [2024-11-19 12:35:40.034778] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.940 "name": "raid_bdev1", 00:15:34.940 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:34.940 "strip_size_kb": 64, 00:15:34.940 "state": "online", 00:15:34.940 "raid_level": "raid5f", 00:15:34.940 "superblock": true, 00:15:34.940 "num_base_bdevs": 4, 00:15:34.940 "num_base_bdevs_discovered": 3, 00:15:34.940 "num_base_bdevs_operational": 3, 00:15:34.940 "base_bdevs_list": [ 00:15:34.940 { 00:15:34.940 "name": null, 00:15:34.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.940 "is_configured": false, 00:15:34.940 "data_offset": 0, 00:15:34.940 "data_size": 63488 00:15:34.940 }, 00:15:34.940 { 00:15:34.940 "name": "BaseBdev2", 00:15:34.940 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:34.940 "is_configured": true, 00:15:34.940 "data_offset": 2048, 00:15:34.940 "data_size": 63488 00:15:34.940 }, 00:15:34.940 { 00:15:34.940 "name": "BaseBdev3", 00:15:34.940 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:34.940 "is_configured": true, 00:15:34.940 "data_offset": 2048, 00:15:34.940 "data_size": 63488 00:15:34.940 }, 00:15:34.940 { 00:15:34.940 "name": "BaseBdev4", 00:15:34.940 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:34.940 "is_configured": true, 00:15:34.940 "data_offset": 2048, 00:15:34.940 "data_size": 63488 00:15:34.940 } 00:15:34.940 ] 00:15:34.940 }' 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.940 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.509 "name": "raid_bdev1", 00:15:35.509 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:35.509 "strip_size_kb": 64, 00:15:35.509 "state": "online", 00:15:35.509 "raid_level": "raid5f", 00:15:35.509 "superblock": true, 00:15:35.509 "num_base_bdevs": 4, 00:15:35.509 "num_base_bdevs_discovered": 3, 00:15:35.509 "num_base_bdevs_operational": 3, 00:15:35.509 "base_bdevs_list": [ 00:15:35.509 { 00:15:35.509 "name": null, 00:15:35.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.509 "is_configured": false, 00:15:35.509 "data_offset": 0, 00:15:35.509 "data_size": 63488 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "name": "BaseBdev2", 00:15:35.509 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:35.509 "is_configured": true, 00:15:35.509 "data_offset": 2048, 00:15:35.509 "data_size": 63488 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "name": "BaseBdev3", 00:15:35.509 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:35.509 "is_configured": true, 
00:15:35.509 "data_offset": 2048, 00:15:35.509 "data_size": 63488 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "name": "BaseBdev4", 00:15:35.509 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:35.509 "is_configured": true, 00:15:35.509 "data_offset": 2048, 00:15:35.509 "data_size": 63488 00:15:35.509 } 00:15:35.509 ] 00:15:35.509 }' 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.509 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.510 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.510 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.510 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.510 [2024-11-19 12:35:40.651475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.510 [2024-11-19 12:35:40.654856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:15:35.510 [2024-11-19 12:35:40.657167] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.510 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.510 12:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:36.447 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.447 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.447 12:35:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.447 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.447 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.447 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.447 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.447 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.447 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.447 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.706 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.706 "name": "raid_bdev1", 00:15:36.706 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:36.706 "strip_size_kb": 64, 00:15:36.706 "state": "online", 00:15:36.706 "raid_level": "raid5f", 00:15:36.706 "superblock": true, 00:15:36.706 "num_base_bdevs": 4, 00:15:36.706 "num_base_bdevs_discovered": 4, 00:15:36.706 "num_base_bdevs_operational": 4, 00:15:36.706 "process": { 00:15:36.706 "type": "rebuild", 00:15:36.706 "target": "spare", 00:15:36.706 "progress": { 00:15:36.706 "blocks": 19200, 00:15:36.706 "percent": 10 00:15:36.706 } 00:15:36.706 }, 00:15:36.706 "base_bdevs_list": [ 00:15:36.706 { 00:15:36.706 "name": "spare", 00:15:36.706 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:36.706 "is_configured": true, 00:15:36.706 "data_offset": 2048, 00:15:36.706 "data_size": 63488 00:15:36.706 }, 00:15:36.706 { 00:15:36.706 "name": "BaseBdev2", 00:15:36.706 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:36.706 "is_configured": true, 00:15:36.706 "data_offset": 2048, 00:15:36.707 "data_size": 63488 
00:15:36.707 }, 00:15:36.707 { 00:15:36.707 "name": "BaseBdev3", 00:15:36.707 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:36.707 "is_configured": true, 00:15:36.707 "data_offset": 2048, 00:15:36.707 "data_size": 63488 00:15:36.707 }, 00:15:36.707 { 00:15:36.707 "name": "BaseBdev4", 00:15:36.707 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:36.707 "is_configured": true, 00:15:36.707 "data_offset": 2048, 00:15:36.707 "data_size": 63488 00:15:36.707 } 00:15:36.707 ] 00:15:36.707 }' 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:36.707 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=534 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.707 12:35:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.707 "name": "raid_bdev1", 00:15:36.707 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:36.707 "strip_size_kb": 64, 00:15:36.707 "state": "online", 00:15:36.707 "raid_level": "raid5f", 00:15:36.707 "superblock": true, 00:15:36.707 "num_base_bdevs": 4, 00:15:36.707 "num_base_bdevs_discovered": 4, 00:15:36.707 "num_base_bdevs_operational": 4, 00:15:36.707 "process": { 00:15:36.707 "type": "rebuild", 00:15:36.707 "target": "spare", 00:15:36.707 "progress": { 00:15:36.707 "blocks": 21120, 00:15:36.707 "percent": 11 00:15:36.707 } 00:15:36.707 }, 00:15:36.707 "base_bdevs_list": [ 00:15:36.707 { 00:15:36.707 "name": "spare", 00:15:36.707 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:36.707 "is_configured": true, 00:15:36.707 "data_offset": 2048, 00:15:36.707 "data_size": 63488 00:15:36.707 }, 00:15:36.707 { 00:15:36.707 "name": "BaseBdev2", 00:15:36.707 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:36.707 "is_configured": true, 00:15:36.707 "data_offset": 2048, 00:15:36.707 "data_size": 63488 
00:15:36.707 }, 00:15:36.707 { 00:15:36.707 "name": "BaseBdev3", 00:15:36.707 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:36.707 "is_configured": true, 00:15:36.707 "data_offset": 2048, 00:15:36.707 "data_size": 63488 00:15:36.707 }, 00:15:36.707 { 00:15:36.707 "name": "BaseBdev4", 00:15:36.707 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:36.707 "is_configured": true, 00:15:36.707 "data_offset": 2048, 00:15:36.707 "data_size": 63488 00:15:36.707 } 00:15:36.707 ] 00:15:36.707 }' 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.707 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.966 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.966 12:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.904 12:35:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.904 12:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.904 12:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.904 "name": "raid_bdev1", 00:15:37.904 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:37.904 "strip_size_kb": 64, 00:15:37.904 "state": "online", 00:15:37.904 "raid_level": "raid5f", 00:15:37.904 "superblock": true, 00:15:37.904 "num_base_bdevs": 4, 00:15:37.904 "num_base_bdevs_discovered": 4, 00:15:37.904 "num_base_bdevs_operational": 4, 00:15:37.904 "process": { 00:15:37.904 "type": "rebuild", 00:15:37.904 "target": "spare", 00:15:37.904 "progress": { 00:15:37.904 "blocks": 44160, 00:15:37.904 "percent": 23 00:15:37.904 } 00:15:37.904 }, 00:15:37.904 "base_bdevs_list": [ 00:15:37.904 { 00:15:37.904 "name": "spare", 00:15:37.904 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:37.904 "is_configured": true, 00:15:37.904 "data_offset": 2048, 00:15:37.904 "data_size": 63488 00:15:37.904 }, 00:15:37.904 { 00:15:37.904 "name": "BaseBdev2", 00:15:37.904 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:37.904 "is_configured": true, 00:15:37.904 "data_offset": 2048, 00:15:37.904 "data_size": 63488 00:15:37.904 }, 00:15:37.904 { 00:15:37.904 "name": "BaseBdev3", 00:15:37.904 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:37.904 "is_configured": true, 00:15:37.904 "data_offset": 2048, 00:15:37.904 "data_size": 63488 00:15:37.904 }, 00:15:37.904 { 00:15:37.904 "name": "BaseBdev4", 00:15:37.904 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:37.904 "is_configured": true, 00:15:37.904 "data_offset": 2048, 00:15:37.904 "data_size": 63488 00:15:37.904 } 00:15:37.904 ] 00:15:37.904 }' 00:15:37.904 12:35:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.904 12:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.904 12:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.904 12:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.904 12:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.282 "name": "raid_bdev1", 00:15:39.282 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:39.282 
"strip_size_kb": 64, 00:15:39.282 "state": "online", 00:15:39.282 "raid_level": "raid5f", 00:15:39.282 "superblock": true, 00:15:39.282 "num_base_bdevs": 4, 00:15:39.282 "num_base_bdevs_discovered": 4, 00:15:39.282 "num_base_bdevs_operational": 4, 00:15:39.282 "process": { 00:15:39.282 "type": "rebuild", 00:15:39.282 "target": "spare", 00:15:39.282 "progress": { 00:15:39.282 "blocks": 65280, 00:15:39.282 "percent": 34 00:15:39.282 } 00:15:39.282 }, 00:15:39.282 "base_bdevs_list": [ 00:15:39.282 { 00:15:39.282 "name": "spare", 00:15:39.282 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:39.282 "is_configured": true, 00:15:39.282 "data_offset": 2048, 00:15:39.282 "data_size": 63488 00:15:39.282 }, 00:15:39.282 { 00:15:39.282 "name": "BaseBdev2", 00:15:39.282 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:39.282 "is_configured": true, 00:15:39.282 "data_offset": 2048, 00:15:39.282 "data_size": 63488 00:15:39.282 }, 00:15:39.282 { 00:15:39.282 "name": "BaseBdev3", 00:15:39.282 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:39.282 "is_configured": true, 00:15:39.282 "data_offset": 2048, 00:15:39.282 "data_size": 63488 00:15:39.282 }, 00:15:39.282 { 00:15:39.282 "name": "BaseBdev4", 00:15:39.282 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:39.282 "is_configured": true, 00:15:39.282 "data_offset": 2048, 00:15:39.282 "data_size": 63488 00:15:39.282 } 00:15:39.282 ] 00:15:39.282 }' 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.282 12:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.218 
12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.218 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.218 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.219 "name": "raid_bdev1", 00:15:40.219 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:40.219 "strip_size_kb": 64, 00:15:40.219 "state": "online", 00:15:40.219 "raid_level": "raid5f", 00:15:40.219 "superblock": true, 00:15:40.219 "num_base_bdevs": 4, 00:15:40.219 "num_base_bdevs_discovered": 4, 00:15:40.219 "num_base_bdevs_operational": 4, 00:15:40.219 "process": { 00:15:40.219 "type": "rebuild", 00:15:40.219 "target": "spare", 00:15:40.219 "progress": { 00:15:40.219 "blocks": 86400, 00:15:40.219 "percent": 45 00:15:40.219 } 00:15:40.219 }, 00:15:40.219 "base_bdevs_list": [ 00:15:40.219 { 00:15:40.219 "name": "spare", 00:15:40.219 "uuid": 
"f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:40.219 "is_configured": true, 00:15:40.219 "data_offset": 2048, 00:15:40.219 "data_size": 63488 00:15:40.219 }, 00:15:40.219 { 00:15:40.219 "name": "BaseBdev2", 00:15:40.219 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:40.219 "is_configured": true, 00:15:40.219 "data_offset": 2048, 00:15:40.219 "data_size": 63488 00:15:40.219 }, 00:15:40.219 { 00:15:40.219 "name": "BaseBdev3", 00:15:40.219 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:40.219 "is_configured": true, 00:15:40.219 "data_offset": 2048, 00:15:40.219 "data_size": 63488 00:15:40.219 }, 00:15:40.219 { 00:15:40.219 "name": "BaseBdev4", 00:15:40.219 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:40.219 "is_configured": true, 00:15:40.219 "data_offset": 2048, 00:15:40.219 "data_size": 63488 00:15:40.219 } 00:15:40.219 ] 00:15:40.219 }' 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.219 12:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.597 "name": "raid_bdev1", 00:15:41.597 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:41.597 "strip_size_kb": 64, 00:15:41.597 "state": "online", 00:15:41.597 "raid_level": "raid5f", 00:15:41.597 "superblock": true, 00:15:41.597 "num_base_bdevs": 4, 00:15:41.597 "num_base_bdevs_discovered": 4, 00:15:41.597 "num_base_bdevs_operational": 4, 00:15:41.597 "process": { 00:15:41.597 "type": "rebuild", 00:15:41.597 "target": "spare", 00:15:41.597 "progress": { 00:15:41.597 "blocks": 109440, 00:15:41.597 "percent": 57 00:15:41.597 } 00:15:41.597 }, 00:15:41.597 "base_bdevs_list": [ 00:15:41.597 { 00:15:41.597 "name": "spare", 00:15:41.597 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:41.597 "is_configured": true, 00:15:41.597 "data_offset": 2048, 00:15:41.597 "data_size": 63488 00:15:41.597 }, 00:15:41.597 { 00:15:41.597 "name": "BaseBdev2", 00:15:41.597 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:41.597 "is_configured": true, 00:15:41.597 "data_offset": 2048, 00:15:41.597 "data_size": 63488 00:15:41.597 }, 00:15:41.597 { 00:15:41.597 "name": "BaseBdev3", 00:15:41.597 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:41.597 "is_configured": true, 00:15:41.597 
"data_offset": 2048, 00:15:41.597 "data_size": 63488 00:15:41.597 }, 00:15:41.597 { 00:15:41.597 "name": "BaseBdev4", 00:15:41.597 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:41.597 "is_configured": true, 00:15:41.597 "data_offset": 2048, 00:15:41.597 "data_size": 63488 00:15:41.597 } 00:15:41.597 ] 00:15:41.597 }' 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.597 12:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.543 "name": "raid_bdev1", 00:15:42.543 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:42.543 "strip_size_kb": 64, 00:15:42.543 "state": "online", 00:15:42.543 "raid_level": "raid5f", 00:15:42.543 "superblock": true, 00:15:42.543 "num_base_bdevs": 4, 00:15:42.543 "num_base_bdevs_discovered": 4, 00:15:42.543 "num_base_bdevs_operational": 4, 00:15:42.543 "process": { 00:15:42.543 "type": "rebuild", 00:15:42.543 "target": "spare", 00:15:42.543 "progress": { 00:15:42.543 "blocks": 130560, 00:15:42.543 "percent": 68 00:15:42.543 } 00:15:42.543 }, 00:15:42.543 "base_bdevs_list": [ 00:15:42.543 { 00:15:42.543 "name": "spare", 00:15:42.543 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:42.543 "is_configured": true, 00:15:42.543 "data_offset": 2048, 00:15:42.543 "data_size": 63488 00:15:42.543 }, 00:15:42.543 { 00:15:42.543 "name": "BaseBdev2", 00:15:42.543 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:42.543 "is_configured": true, 00:15:42.543 "data_offset": 2048, 00:15:42.543 "data_size": 63488 00:15:42.543 }, 00:15:42.543 { 00:15:42.543 "name": "BaseBdev3", 00:15:42.543 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:42.543 "is_configured": true, 00:15:42.543 "data_offset": 2048, 00:15:42.543 "data_size": 63488 00:15:42.543 }, 00:15:42.543 { 00:15:42.543 "name": "BaseBdev4", 00:15:42.543 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:42.543 "is_configured": true, 00:15:42.543 "data_offset": 2048, 00:15:42.543 "data_size": 63488 00:15:42.543 } 00:15:42.543 ] 00:15:42.543 }' 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.543 12:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.495 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.753 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.753 "name": "raid_bdev1", 00:15:43.753 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:43.753 "strip_size_kb": 64, 00:15:43.753 "state": "online", 00:15:43.753 "raid_level": "raid5f", 00:15:43.753 "superblock": true, 00:15:43.753 "num_base_bdevs": 4, 00:15:43.753 "num_base_bdevs_discovered": 4, 
00:15:43.753 "num_base_bdevs_operational": 4, 00:15:43.753 "process": { 00:15:43.753 "type": "rebuild", 00:15:43.753 "target": "spare", 00:15:43.753 "progress": { 00:15:43.753 "blocks": 153600, 00:15:43.753 "percent": 80 00:15:43.753 } 00:15:43.753 }, 00:15:43.753 "base_bdevs_list": [ 00:15:43.753 { 00:15:43.753 "name": "spare", 00:15:43.753 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:43.753 "is_configured": true, 00:15:43.753 "data_offset": 2048, 00:15:43.753 "data_size": 63488 00:15:43.753 }, 00:15:43.753 { 00:15:43.753 "name": "BaseBdev2", 00:15:43.753 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:43.753 "is_configured": true, 00:15:43.753 "data_offset": 2048, 00:15:43.753 "data_size": 63488 00:15:43.753 }, 00:15:43.753 { 00:15:43.753 "name": "BaseBdev3", 00:15:43.753 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:43.753 "is_configured": true, 00:15:43.753 "data_offset": 2048, 00:15:43.753 "data_size": 63488 00:15:43.753 }, 00:15:43.753 { 00:15:43.753 "name": "BaseBdev4", 00:15:43.753 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:43.753 "is_configured": true, 00:15:43.753 "data_offset": 2048, 00:15:43.753 "data_size": 63488 00:15:43.753 } 00:15:43.753 ] 00:15:43.753 }' 00:15:43.753 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.753 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.753 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.753 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.753 12:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.691 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.691 "name": "raid_bdev1", 00:15:44.691 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:44.691 "strip_size_kb": 64, 00:15:44.691 "state": "online", 00:15:44.691 "raid_level": "raid5f", 00:15:44.691 "superblock": true, 00:15:44.691 "num_base_bdevs": 4, 00:15:44.691 "num_base_bdevs_discovered": 4, 00:15:44.691 "num_base_bdevs_operational": 4, 00:15:44.691 "process": { 00:15:44.691 "type": "rebuild", 00:15:44.691 "target": "spare", 00:15:44.691 "progress": { 00:15:44.691 "blocks": 174720, 00:15:44.691 "percent": 91 00:15:44.691 } 00:15:44.691 }, 00:15:44.691 "base_bdevs_list": [ 00:15:44.691 { 00:15:44.691 "name": "spare", 00:15:44.691 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:44.691 "is_configured": true, 00:15:44.691 "data_offset": 2048, 00:15:44.691 "data_size": 63488 00:15:44.691 }, 00:15:44.691 { 00:15:44.691 "name": "BaseBdev2", 
00:15:44.691 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:44.691 "is_configured": true, 00:15:44.691 "data_offset": 2048, 00:15:44.691 "data_size": 63488 00:15:44.691 }, 00:15:44.691 { 00:15:44.691 "name": "BaseBdev3", 00:15:44.691 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:44.691 "is_configured": true, 00:15:44.691 "data_offset": 2048, 00:15:44.691 "data_size": 63488 00:15:44.691 }, 00:15:44.691 { 00:15:44.691 "name": "BaseBdev4", 00:15:44.691 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:44.691 "is_configured": true, 00:15:44.692 "data_offset": 2048, 00:15:44.692 "data_size": 63488 00:15:44.692 } 00:15:44.692 ] 00:15:44.692 }' 00:15:44.692 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.692 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.692 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.951 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.951 12:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.519 [2024-11-19 12:35:50.724697] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:45.519 [2024-11-19 12:35:50.724816] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:45.519 [2024-11-19 12:35:50.724971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.779 12:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.779 12:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.779 12:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.779 12:35:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.779 12:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.779 12:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.779 12:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.779 12:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.779 12:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.779 12:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.779 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.039 "name": "raid_bdev1", 00:15:46.039 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:46.039 "strip_size_kb": 64, 00:15:46.039 "state": "online", 00:15:46.039 "raid_level": "raid5f", 00:15:46.039 "superblock": true, 00:15:46.039 "num_base_bdevs": 4, 00:15:46.039 "num_base_bdevs_discovered": 4, 00:15:46.039 "num_base_bdevs_operational": 4, 00:15:46.039 "base_bdevs_list": [ 00:15:46.039 { 00:15:46.039 "name": "spare", 00:15:46.039 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 }, 00:15:46.039 { 00:15:46.039 "name": "BaseBdev2", 00:15:46.039 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 }, 00:15:46.039 { 00:15:46.039 "name": "BaseBdev3", 00:15:46.039 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 
"data_size": 63488 00:15:46.039 }, 00:15:46.039 { 00:15:46.039 "name": "BaseBdev4", 00:15:46.039 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 } 00:15:46.039 ] 00:15:46.039 }' 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.039 12:35:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.039 "name": "raid_bdev1", 00:15:46.039 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:46.039 "strip_size_kb": 64, 00:15:46.039 "state": "online", 00:15:46.039 "raid_level": "raid5f", 00:15:46.039 "superblock": true, 00:15:46.039 "num_base_bdevs": 4, 00:15:46.039 "num_base_bdevs_discovered": 4, 00:15:46.039 "num_base_bdevs_operational": 4, 00:15:46.039 "base_bdevs_list": [ 00:15:46.039 { 00:15:46.039 "name": "spare", 00:15:46.039 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 }, 00:15:46.039 { 00:15:46.039 "name": "BaseBdev2", 00:15:46.039 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 }, 00:15:46.039 { 00:15:46.039 "name": "BaseBdev3", 00:15:46.039 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 }, 00:15:46.039 { 00:15:46.039 "name": "BaseBdev4", 00:15:46.039 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 } 00:15:46.039 ] 00:15:46.039 }' 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.039 "name": "raid_bdev1", 00:15:46.039 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:46.039 "strip_size_kb": 64, 00:15:46.039 "state": "online", 00:15:46.039 "raid_level": "raid5f", 00:15:46.039 "superblock": true, 00:15:46.039 "num_base_bdevs": 4, 00:15:46.039 "num_base_bdevs_discovered": 4, 00:15:46.039 
"num_base_bdevs_operational": 4, 00:15:46.039 "base_bdevs_list": [ 00:15:46.039 { 00:15:46.039 "name": "spare", 00:15:46.039 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 }, 00:15:46.039 { 00:15:46.039 "name": "BaseBdev2", 00:15:46.039 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 }, 00:15:46.039 { 00:15:46.039 "name": "BaseBdev3", 00:15:46.039 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 }, 00:15:46.039 { 00:15:46.039 "name": "BaseBdev4", 00:15:46.039 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:46.039 "is_configured": true, 00:15:46.039 "data_offset": 2048, 00:15:46.039 "data_size": 63488 00:15:46.039 } 00:15:46.039 ] 00:15:46.039 }' 00:15:46.039 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.040 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.608 [2024-11-19 12:35:51.664914] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.608 [2024-11-19 12:35:51.664966] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.608 [2024-11-19 12:35:51.665066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.608 [2024-11-19 12:35:51.665177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:15:46.608 [2024-11-19 12:35:51.665202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:46.608 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.609 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:46.609 12:35:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.609 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.609 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:46.868 /dev/nbd0 00:15:46.868 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.868 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.868 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:46.868 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:46.868 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:46.868 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:46.868 12:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.868 1+0 records in 00:15:46.868 1+0 records out 00:15:46.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450894 s, 9.1 MB/s 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # size=4096 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.868 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:47.127 /dev/nbd1 00:15:47.127 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.127 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.127 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:47.127 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.128 1+0 records in 00:15:47.128 1+0 records out 00:15:47.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536622 s, 7.6 MB/s 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.128 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.387 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.647 [2024-11-19 12:35:52.832470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:47.647 [2024-11-19 12:35:52.832553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.647 [2024-11-19 12:35:52.832575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:47.647 [2024-11-19 12:35:52.832586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.647 [2024-11-19 12:35:52.834906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.647 [2024-11-19 12:35:52.834955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:47.647 [2024-11-19 12:35:52.835052] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:47.647 [2024-11-19 12:35:52.835094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.647 [2024-11-19 12:35:52.835213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.647 [2024-11-19 12:35:52.835315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:47.647 [2024-11-19 12:35:52.835389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:47.647 spare 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.647 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.906 [2024-11-19 12:35:52.935314] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:47.906 [2024-11-19 12:35:52.935364] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:47.906 [2024-11-19 12:35:52.935726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:15:47.906 [2024-11-19 12:35:52.936318] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:47.906 [2024-11-19 12:35:52.936344] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:47.906 [2024-11-19 12:35:52.936547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.906 12:35:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.906 "name": "raid_bdev1", 00:15:47.906 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:47.906 "strip_size_kb": 64, 00:15:47.906 "state": "online", 00:15:47.906 "raid_level": "raid5f", 00:15:47.906 "superblock": true, 00:15:47.906 "num_base_bdevs": 4, 00:15:47.906 "num_base_bdevs_discovered": 4, 00:15:47.906 "num_base_bdevs_operational": 4, 00:15:47.906 "base_bdevs_list": [ 00:15:47.906 { 00:15:47.906 "name": "spare", 00:15:47.906 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:47.906 "is_configured": true, 00:15:47.906 "data_offset": 2048, 00:15:47.906 "data_size": 63488 00:15:47.906 }, 00:15:47.906 { 00:15:47.906 "name": "BaseBdev2", 00:15:47.906 "uuid": 
"ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:47.906 "is_configured": true, 00:15:47.906 "data_offset": 2048, 00:15:47.906 "data_size": 63488 00:15:47.906 }, 00:15:47.906 { 00:15:47.906 "name": "BaseBdev3", 00:15:47.906 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:47.906 "is_configured": true, 00:15:47.906 "data_offset": 2048, 00:15:47.906 "data_size": 63488 00:15:47.906 }, 00:15:47.906 { 00:15:47.906 "name": "BaseBdev4", 00:15:47.906 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:47.906 "is_configured": true, 00:15:47.906 "data_offset": 2048, 00:15:47.906 "data_size": 63488 00:15:47.906 } 00:15:47.906 ] 00:15:47.906 }' 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.906 12:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.165 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.425 12:35:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.425 "name": "raid_bdev1", 00:15:48.425 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:48.425 "strip_size_kb": 64, 00:15:48.425 "state": "online", 00:15:48.425 "raid_level": "raid5f", 00:15:48.425 "superblock": true, 00:15:48.425 "num_base_bdevs": 4, 00:15:48.425 "num_base_bdevs_discovered": 4, 00:15:48.425 "num_base_bdevs_operational": 4, 00:15:48.425 "base_bdevs_list": [ 00:15:48.425 { 00:15:48.425 "name": "spare", 00:15:48.425 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:48.425 "is_configured": true, 00:15:48.425 "data_offset": 2048, 00:15:48.425 "data_size": 63488 00:15:48.425 }, 00:15:48.425 { 00:15:48.425 "name": "BaseBdev2", 00:15:48.425 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:48.425 "is_configured": true, 00:15:48.425 "data_offset": 2048, 00:15:48.425 "data_size": 63488 00:15:48.425 }, 00:15:48.425 { 00:15:48.425 "name": "BaseBdev3", 00:15:48.425 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:48.425 "is_configured": true, 00:15:48.425 "data_offset": 2048, 00:15:48.425 "data_size": 63488 00:15:48.425 }, 00:15:48.425 { 00:15:48.425 "name": "BaseBdev4", 00:15:48.425 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:48.425 "is_configured": true, 00:15:48.425 "data_offset": 2048, 00:15:48.425 "data_size": 63488 00:15:48.425 } 00:15:48.425 ] 00:15:48.425 }' 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:48.425 
12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.425 [2024-11-19 12:35:53.583511] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.425 
12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.425 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.425 "name": "raid_bdev1", 00:15:48.425 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:48.425 "strip_size_kb": 64, 00:15:48.425 "state": "online", 00:15:48.425 "raid_level": "raid5f", 00:15:48.426 "superblock": true, 00:15:48.426 "num_base_bdevs": 4, 00:15:48.426 "num_base_bdevs_discovered": 3, 00:15:48.426 "num_base_bdevs_operational": 3, 00:15:48.426 "base_bdevs_list": [ 00:15:48.426 { 00:15:48.426 "name": null, 00:15:48.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.426 "is_configured": false, 00:15:48.426 "data_offset": 0, 00:15:48.426 "data_size": 63488 00:15:48.426 }, 00:15:48.426 { 00:15:48.426 "name": "BaseBdev2", 00:15:48.426 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:48.426 "is_configured": true, 00:15:48.426 "data_offset": 2048, 00:15:48.426 "data_size": 63488 00:15:48.426 }, 00:15:48.426 { 00:15:48.426 "name": "BaseBdev3", 00:15:48.426 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:48.426 "is_configured": true, 00:15:48.426 "data_offset": 2048, 00:15:48.426 "data_size": 63488 00:15:48.426 }, 00:15:48.426 { 00:15:48.426 "name": "BaseBdev4", 00:15:48.426 "uuid": 
"e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:48.426 "is_configured": true, 00:15:48.426 "data_offset": 2048, 00:15:48.426 "data_size": 63488 00:15:48.426 } 00:15:48.426 ] 00:15:48.426 }' 00:15:48.426 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.426 12:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.994 12:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:48.994 12:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.994 12:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.994 [2024-11-19 12:35:54.062875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.994 [2024-11-19 12:35:54.063144] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:48.994 [2024-11-19 12:35:54.063169] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:48.994 [2024-11-19 12:35:54.063216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.994 [2024-11-19 12:35:54.066383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:15:48.994 [2024-11-19 12:35:54.068737] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:48.994 12:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.994 12:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.931 "name": "raid_bdev1", 00:15:49.931 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:49.931 "strip_size_kb": 64, 00:15:49.931 "state": "online", 00:15:49.931 
"raid_level": "raid5f", 00:15:49.931 "superblock": true, 00:15:49.931 "num_base_bdevs": 4, 00:15:49.931 "num_base_bdevs_discovered": 4, 00:15:49.931 "num_base_bdevs_operational": 4, 00:15:49.931 "process": { 00:15:49.931 "type": "rebuild", 00:15:49.931 "target": "spare", 00:15:49.931 "progress": { 00:15:49.931 "blocks": 19200, 00:15:49.931 "percent": 10 00:15:49.931 } 00:15:49.931 }, 00:15:49.931 "base_bdevs_list": [ 00:15:49.931 { 00:15:49.931 "name": "spare", 00:15:49.931 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:49.931 "is_configured": true, 00:15:49.931 "data_offset": 2048, 00:15:49.931 "data_size": 63488 00:15:49.931 }, 00:15:49.931 { 00:15:49.931 "name": "BaseBdev2", 00:15:49.931 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:49.931 "is_configured": true, 00:15:49.931 "data_offset": 2048, 00:15:49.931 "data_size": 63488 00:15:49.931 }, 00:15:49.931 { 00:15:49.931 "name": "BaseBdev3", 00:15:49.931 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:49.931 "is_configured": true, 00:15:49.931 "data_offset": 2048, 00:15:49.931 "data_size": 63488 00:15:49.931 }, 00:15:49.931 { 00:15:49.931 "name": "BaseBdev4", 00:15:49.931 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:49.931 "is_configured": true, 00:15:49.931 "data_offset": 2048, 00:15:49.931 "data_size": 63488 00:15:49.931 } 00:15:49.931 ] 00:15:49.931 }' 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.931 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.190 [2024-11-19 12:35:55.235575] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.190 [2024-11-19 12:35:55.277043] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:50.190 [2024-11-19 12:35:55.277130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.190 [2024-11-19 12:35:55.277152] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.190 [2024-11-19 12:35:55.277159] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.190 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.191 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.191 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.191 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.191 "name": "raid_bdev1", 00:15:50.191 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:50.191 "strip_size_kb": 64, 00:15:50.191 "state": "online", 00:15:50.191 "raid_level": "raid5f", 00:15:50.191 "superblock": true, 00:15:50.191 "num_base_bdevs": 4, 00:15:50.191 "num_base_bdevs_discovered": 3, 00:15:50.191 "num_base_bdevs_operational": 3, 00:15:50.191 "base_bdevs_list": [ 00:15:50.191 { 00:15:50.191 "name": null, 00:15:50.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.191 "is_configured": false, 00:15:50.191 "data_offset": 0, 00:15:50.191 "data_size": 63488 00:15:50.191 }, 00:15:50.191 { 00:15:50.191 "name": "BaseBdev2", 00:15:50.191 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:50.191 "is_configured": true, 00:15:50.191 "data_offset": 2048, 00:15:50.191 "data_size": 63488 00:15:50.191 }, 00:15:50.191 { 00:15:50.191 "name": "BaseBdev3", 00:15:50.191 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:50.191 "is_configured": true, 00:15:50.191 "data_offset": 2048, 00:15:50.191 "data_size": 63488 00:15:50.191 }, 00:15:50.191 { 00:15:50.191 "name": "BaseBdev4", 00:15:50.191 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:50.191 "is_configured": true, 00:15:50.191 "data_offset": 2048, 00:15:50.191 "data_size": 63488 00:15:50.191 } 00:15:50.191 ] 00:15:50.191 
}' 00:15:50.191 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.191 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.758 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:50.758 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.758 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.758 [2024-11-19 12:35:55.745520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:50.758 [2024-11-19 12:35:55.745609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.758 [2024-11-19 12:35:55.745640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:50.758 [2024-11-19 12:35:55.745649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.758 [2024-11-19 12:35:55.746136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.758 [2024-11-19 12:35:55.746165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:50.758 [2024-11-19 12:35:55.746263] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:50.759 [2024-11-19 12:35:55.746278] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.759 [2024-11-19 12:35:55.746294] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:50.759 [2024-11-19 12:35:55.746318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.759 spare 00:15:50.759 [2024-11-19 12:35:55.749571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:15:50.759 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.759 12:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:50.759 [2024-11-19 12:35:55.751870] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.697 "name": "raid_bdev1", 00:15:51.697 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:51.697 "strip_size_kb": 64, 00:15:51.697 "state": 
"online", 00:15:51.697 "raid_level": "raid5f", 00:15:51.697 "superblock": true, 00:15:51.697 "num_base_bdevs": 4, 00:15:51.697 "num_base_bdevs_discovered": 4, 00:15:51.697 "num_base_bdevs_operational": 4, 00:15:51.697 "process": { 00:15:51.697 "type": "rebuild", 00:15:51.697 "target": "spare", 00:15:51.697 "progress": { 00:15:51.697 "blocks": 19200, 00:15:51.697 "percent": 10 00:15:51.697 } 00:15:51.697 }, 00:15:51.697 "base_bdevs_list": [ 00:15:51.697 { 00:15:51.697 "name": "spare", 00:15:51.697 "uuid": "f9c2fecd-83c8-5566-ac7f-0637362a1083", 00:15:51.697 "is_configured": true, 00:15:51.697 "data_offset": 2048, 00:15:51.697 "data_size": 63488 00:15:51.697 }, 00:15:51.697 { 00:15:51.697 "name": "BaseBdev2", 00:15:51.697 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:51.697 "is_configured": true, 00:15:51.697 "data_offset": 2048, 00:15:51.697 "data_size": 63488 00:15:51.697 }, 00:15:51.697 { 00:15:51.697 "name": "BaseBdev3", 00:15:51.697 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:51.697 "is_configured": true, 00:15:51.697 "data_offset": 2048, 00:15:51.697 "data_size": 63488 00:15:51.697 }, 00:15:51.697 { 00:15:51.697 "name": "BaseBdev4", 00:15:51.697 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:51.697 "is_configured": true, 00:15:51.697 "data_offset": 2048, 00:15:51.697 "data_size": 63488 00:15:51.697 } 00:15:51.697 ] 00:15:51.697 }' 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:51.697 12:35:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.697 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.697 [2024-11-19 12:35:56.908268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.981 [2024-11-19 12:35:56.960289] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:51.981 [2024-11-19 12:35:56.960402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.981 [2024-11-19 12:35:56.960423] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.981 [2024-11-19 12:35:56.960436] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.981 12:35:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.981 12:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.981 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.981 "name": "raid_bdev1", 00:15:51.981 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:51.981 "strip_size_kb": 64, 00:15:51.981 "state": "online", 00:15:51.981 "raid_level": "raid5f", 00:15:51.981 "superblock": true, 00:15:51.981 "num_base_bdevs": 4, 00:15:51.981 "num_base_bdevs_discovered": 3, 00:15:51.981 "num_base_bdevs_operational": 3, 00:15:51.981 "base_bdevs_list": [ 00:15:51.981 { 00:15:51.981 "name": null, 00:15:51.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.981 "is_configured": false, 00:15:51.981 "data_offset": 0, 00:15:51.981 "data_size": 63488 00:15:51.981 }, 00:15:51.981 { 00:15:51.981 "name": "BaseBdev2", 00:15:51.981 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:51.981 "is_configured": true, 00:15:51.981 "data_offset": 2048, 00:15:51.981 "data_size": 63488 00:15:51.981 }, 00:15:51.981 { 00:15:51.981 "name": "BaseBdev3", 00:15:51.981 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:51.981 "is_configured": true, 00:15:51.981 "data_offset": 2048, 00:15:51.981 "data_size": 63488 00:15:51.981 }, 00:15:51.981 { 00:15:51.981 "name": "BaseBdev4", 00:15:51.981 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:51.981 "is_configured": true, 00:15:51.981 "data_offset": 2048, 00:15:51.981 
"data_size": 63488 00:15:51.981 } 00:15:51.981 ] 00:15:51.981 }' 00:15:51.981 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.982 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.252 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.252 "name": "raid_bdev1", 00:15:52.252 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:52.252 "strip_size_kb": 64, 00:15:52.252 "state": "online", 00:15:52.252 "raid_level": "raid5f", 00:15:52.252 "superblock": true, 00:15:52.252 "num_base_bdevs": 4, 00:15:52.252 "num_base_bdevs_discovered": 3, 00:15:52.252 "num_base_bdevs_operational": 3, 00:15:52.252 "base_bdevs_list": [ 00:15:52.252 { 00:15:52.252 "name": null, 00:15:52.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.252 
"is_configured": false, 00:15:52.252 "data_offset": 0, 00:15:52.252 "data_size": 63488 00:15:52.252 }, 00:15:52.252 { 00:15:52.252 "name": "BaseBdev2", 00:15:52.252 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:52.252 "is_configured": true, 00:15:52.252 "data_offset": 2048, 00:15:52.252 "data_size": 63488 00:15:52.252 }, 00:15:52.252 { 00:15:52.252 "name": "BaseBdev3", 00:15:52.252 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:52.252 "is_configured": true, 00:15:52.252 "data_offset": 2048, 00:15:52.252 "data_size": 63488 00:15:52.252 }, 00:15:52.252 { 00:15:52.252 "name": "BaseBdev4", 00:15:52.253 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:52.253 "is_configured": true, 00:15:52.253 "data_offset": 2048, 00:15:52.253 "data_size": 63488 00:15:52.253 } 00:15:52.253 ] 00:15:52.253 }' 00:15:52.253 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.253 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.253 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.512 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.512 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:52.512 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.512 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.512 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.512 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:52.512 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.512 12:35:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.512 [2024-11-19 12:35:57.552787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:52.512 [2024-11-19 12:35:57.552868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.512 [2024-11-19 12:35:57.552890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:52.512 [2024-11-19 12:35:57.552901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.512 [2024-11-19 12:35:57.553369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.512 [2024-11-19 12:35:57.553405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.512 [2024-11-19 12:35:57.553486] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:52.512 [2024-11-19 12:35:57.553508] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:52.512 [2024-11-19 12:35:57.553517] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:52.512 [2024-11-19 12:35:57.553530] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:52.512 BaseBdev1 00:15:52.512 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.512 12:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.448 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.448 "name": "raid_bdev1", 00:15:53.448 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:53.448 "strip_size_kb": 64, 00:15:53.448 "state": "online", 00:15:53.448 "raid_level": "raid5f", 00:15:53.448 "superblock": true, 00:15:53.448 "num_base_bdevs": 4, 00:15:53.448 "num_base_bdevs_discovered": 3, 00:15:53.448 "num_base_bdevs_operational": 3, 00:15:53.448 "base_bdevs_list": [ 00:15:53.448 { 00:15:53.448 "name": null, 00:15:53.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.448 "is_configured": false, 00:15:53.448 
"data_offset": 0, 00:15:53.448 "data_size": 63488 00:15:53.448 }, 00:15:53.448 { 00:15:53.448 "name": "BaseBdev2", 00:15:53.448 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:53.448 "is_configured": true, 00:15:53.448 "data_offset": 2048, 00:15:53.448 "data_size": 63488 00:15:53.448 }, 00:15:53.448 { 00:15:53.449 "name": "BaseBdev3", 00:15:53.449 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:53.449 "is_configured": true, 00:15:53.449 "data_offset": 2048, 00:15:53.449 "data_size": 63488 00:15:53.449 }, 00:15:53.449 { 00:15:53.449 "name": "BaseBdev4", 00:15:53.449 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:53.449 "is_configured": true, 00:15:53.449 "data_offset": 2048, 00:15:53.449 "data_size": 63488 00:15:53.449 } 00:15:53.449 ] 00:15:53.449 }' 00:15:53.449 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.449 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.016 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.016 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.016 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.016 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.016 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.016 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.016 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.016 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.017 12:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.017 "name": "raid_bdev1", 00:15:54.017 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:54.017 "strip_size_kb": 64, 00:15:54.017 "state": "online", 00:15:54.017 "raid_level": "raid5f", 00:15:54.017 "superblock": true, 00:15:54.017 "num_base_bdevs": 4, 00:15:54.017 "num_base_bdevs_discovered": 3, 00:15:54.017 "num_base_bdevs_operational": 3, 00:15:54.017 "base_bdevs_list": [ 00:15:54.017 { 00:15:54.017 "name": null, 00:15:54.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.017 "is_configured": false, 00:15:54.017 "data_offset": 0, 00:15:54.017 "data_size": 63488 00:15:54.017 }, 00:15:54.017 { 00:15:54.017 "name": "BaseBdev2", 00:15:54.017 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:54.017 "is_configured": true, 00:15:54.017 "data_offset": 2048, 00:15:54.017 "data_size": 63488 00:15:54.017 }, 00:15:54.017 { 00:15:54.017 "name": "BaseBdev3", 00:15:54.017 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:54.017 "is_configured": true, 00:15:54.017 "data_offset": 2048, 00:15:54.017 "data_size": 63488 00:15:54.017 }, 00:15:54.017 { 00:15:54.017 "name": "BaseBdev4", 00:15:54.017 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:54.017 "is_configured": true, 00:15:54.017 "data_offset": 2048, 00:15:54.017 "data_size": 63488 00:15:54.017 } 00:15:54.017 ] 00:15:54.017 }' 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.017 
12:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.017 [2024-11-19 12:35:59.130211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.017 [2024-11-19 12:35:59.130409] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:54.017 [2024-11-19 12:35:59.130433] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:54.017 request: 00:15:54.017 { 00:15:54.017 "base_bdev": "BaseBdev1", 00:15:54.017 "raid_bdev": "raid_bdev1", 00:15:54.017 "method": "bdev_raid_add_base_bdev", 00:15:54.017 "req_id": 1 00:15:54.017 } 00:15:54.017 Got JSON-RPC error response 00:15:54.017 response: 00:15:54.017 { 00:15:54.017 "code": -22, 00:15:54.017 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:15:54.017 } 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:54.017 12:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.953 "name": "raid_bdev1", 00:15:54.953 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:54.953 "strip_size_kb": 64, 00:15:54.953 "state": "online", 00:15:54.953 "raid_level": "raid5f", 00:15:54.953 "superblock": true, 00:15:54.953 "num_base_bdevs": 4, 00:15:54.953 "num_base_bdevs_discovered": 3, 00:15:54.953 "num_base_bdevs_operational": 3, 00:15:54.953 "base_bdevs_list": [ 00:15:54.953 { 00:15:54.953 "name": null, 00:15:54.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.953 "is_configured": false, 00:15:54.953 "data_offset": 0, 00:15:54.953 "data_size": 63488 00:15:54.953 }, 00:15:54.953 { 00:15:54.953 "name": "BaseBdev2", 00:15:54.953 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:54.953 "is_configured": true, 00:15:54.953 "data_offset": 2048, 00:15:54.953 "data_size": 63488 00:15:54.953 }, 00:15:54.953 { 00:15:54.953 "name": "BaseBdev3", 00:15:54.953 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:54.953 "is_configured": true, 00:15:54.953 "data_offset": 2048, 00:15:54.953 "data_size": 63488 00:15:54.953 }, 00:15:54.953 { 00:15:54.953 "name": "BaseBdev4", 00:15:54.953 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:54.953 "is_configured": true, 00:15:54.953 "data_offset": 2048, 00:15:54.953 "data_size": 63488 00:15:54.953 } 00:15:54.953 ] 00:15:54.953 }' 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.953 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.522 "name": "raid_bdev1", 00:15:55.522 "uuid": "9f8b8b01-91ab-4778-8237-f65ba57134a0", 00:15:55.522 "strip_size_kb": 64, 00:15:55.522 "state": "online", 00:15:55.522 "raid_level": "raid5f", 00:15:55.522 "superblock": true, 00:15:55.522 "num_base_bdevs": 4, 00:15:55.522 "num_base_bdevs_discovered": 3, 00:15:55.522 "num_base_bdevs_operational": 3, 00:15:55.522 "base_bdevs_list": [ 00:15:55.522 { 00:15:55.522 "name": null, 00:15:55.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.522 "is_configured": false, 00:15:55.522 "data_offset": 0, 00:15:55.522 "data_size": 63488 00:15:55.522 }, 00:15:55.522 { 00:15:55.522 "name": "BaseBdev2", 00:15:55.522 "uuid": "ba94fb99-1c2c-5b04-b718-c1c5001d5741", 00:15:55.522 "is_configured": true, 
00:15:55.522 "data_offset": 2048, 00:15:55.522 "data_size": 63488 00:15:55.522 }, 00:15:55.522 { 00:15:55.522 "name": "BaseBdev3", 00:15:55.522 "uuid": "7f4af64a-13cb-5bab-8995-dedaced35e33", 00:15:55.522 "is_configured": true, 00:15:55.522 "data_offset": 2048, 00:15:55.522 "data_size": 63488 00:15:55.522 }, 00:15:55.522 { 00:15:55.522 "name": "BaseBdev4", 00:15:55.522 "uuid": "e150da9d-9af2-51dd-a571-1ea479e17c64", 00:15:55.522 "is_configured": true, 00:15:55.522 "data_offset": 2048, 00:15:55.522 "data_size": 63488 00:15:55.522 } 00:15:55.522 ] 00:15:55.522 }' 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95728 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95728 ']' 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95728 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95728 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:55.522 killing process with pid 95728 00:15:55.522 12:36:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95728' 00:15:55.522 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95728 00:15:55.522 Received shutdown signal, test time was about 60.000000 seconds 00:15:55.522 00:15:55.522 Latency(us) 00:15:55.522 [2024-11-19T12:36:00.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.523 [2024-11-19T12:36:00.784Z] =================================================================================================================== 00:15:55.523 [2024-11-19T12:36:00.784Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:55.523 [2024-11-19 12:36:00.772571] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.523 [2024-11-19 12:36:00.772717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.523 12:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95728 00:15:55.523 [2024-11-19 12:36:00.772822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.523 [2024-11-19 12:36:00.772834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:55.782 [2024-11-19 12:36:00.824989] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.040 12:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:56.040 00:15:56.041 real 0m25.331s 00:15:56.041 user 0m32.164s 00:15:56.041 sys 0m3.135s 00:15:56.041 12:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.041 12:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.041 ************************************ 00:15:56.041 END TEST raid5f_rebuild_test_sb 00:15:56.041 ************************************ 00:15:56.041 12:36:01 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:56.041 12:36:01 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:56.041 12:36:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:56.041 12:36:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.041 12:36:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.041 ************************************ 00:15:56.041 START TEST raid_state_function_test_sb_4k 00:15:56.041 ************************************ 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.041 12:36:01 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96526 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:56.041 Process raid pid: 96526 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96526' 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96526 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96526 ']' 00:15:56.041 12:36:01 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.041 12:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.041 [2024-11-19 12:36:01.237126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:56.041 [2024-11-19 12:36:01.237318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.300 [2024-11-19 12:36:01.407317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.300 [2024-11-19 12:36:01.461129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.300 [2024-11-19 12:36:01.503376] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.300 [2024-11-19 12:36:01.503417] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.237 [2024-11-19 12:36:02.133091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.237 [2024-11-19 12:36:02.133167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.237 [2024-11-19 12:36:02.133179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.237 [2024-11-19 12:36:02.133189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.237 
12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.237 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.237 "name": "Existed_Raid", 00:15:57.237 "uuid": "5dcaf3ca-e8e6-4d69-bdb7-2473122ecce4", 00:15:57.237 "strip_size_kb": 0, 00:15:57.237 "state": "configuring", 00:15:57.237 "raid_level": "raid1", 00:15:57.237 "superblock": true, 00:15:57.237 "num_base_bdevs": 2, 00:15:57.237 "num_base_bdevs_discovered": 0, 00:15:57.237 "num_base_bdevs_operational": 2, 00:15:57.237 "base_bdevs_list": [ 00:15:57.237 { 00:15:57.237 "name": "BaseBdev1", 00:15:57.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.237 "is_configured": false, 00:15:57.237 "data_offset": 0, 00:15:57.237 "data_size": 0 00:15:57.237 }, 00:15:57.237 { 00:15:57.237 "name": "BaseBdev2", 00:15:57.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.237 "is_configured": false, 00:15:57.237 "data_offset": 0, 00:15:57.237 "data_size": 0 00:15:57.237 } 00:15:57.238 ] 00:15:57.238 }' 00:15:57.238 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.238 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 [2024-11-19 12:36:02.588233] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.497 [2024-11-19 12:36:02.588303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 [2024-11-19 12:36:02.600258] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.497 [2024-11-19 12:36:02.600315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.497 [2024-11-19 12:36:02.600324] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.497 [2024-11-19 12:36:02.600333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.497 12:36:02 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 [2024-11-19 12:36:02.621107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.497 BaseBdev1 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 [ 00:15:57.497 { 00:15:57.497 "name": "BaseBdev1", 00:15:57.497 "aliases": [ 00:15:57.497 
"19e98848-0644-4901-86d8-556b816bafeb" 00:15:57.497 ], 00:15:57.497 "product_name": "Malloc disk", 00:15:57.497 "block_size": 4096, 00:15:57.497 "num_blocks": 8192, 00:15:57.497 "uuid": "19e98848-0644-4901-86d8-556b816bafeb", 00:15:57.497 "assigned_rate_limits": { 00:15:57.497 "rw_ios_per_sec": 0, 00:15:57.497 "rw_mbytes_per_sec": 0, 00:15:57.497 "r_mbytes_per_sec": 0, 00:15:57.497 "w_mbytes_per_sec": 0 00:15:57.497 }, 00:15:57.497 "claimed": true, 00:15:57.497 "claim_type": "exclusive_write", 00:15:57.497 "zoned": false, 00:15:57.497 "supported_io_types": { 00:15:57.497 "read": true, 00:15:57.497 "write": true, 00:15:57.497 "unmap": true, 00:15:57.497 "flush": true, 00:15:57.497 "reset": true, 00:15:57.497 "nvme_admin": false, 00:15:57.497 "nvme_io": false, 00:15:57.497 "nvme_io_md": false, 00:15:57.497 "write_zeroes": true, 00:15:57.497 "zcopy": true, 00:15:57.497 "get_zone_info": false, 00:15:57.497 "zone_management": false, 00:15:57.497 "zone_append": false, 00:15:57.497 "compare": false, 00:15:57.497 "compare_and_write": false, 00:15:57.497 "abort": true, 00:15:57.497 "seek_hole": false, 00:15:57.497 "seek_data": false, 00:15:57.497 "copy": true, 00:15:57.497 "nvme_iov_md": false 00:15:57.497 }, 00:15:57.497 "memory_domains": [ 00:15:57.497 { 00:15:57.497 "dma_device_id": "system", 00:15:57.497 "dma_device_type": 1 00:15:57.497 }, 00:15:57.497 { 00:15:57.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.497 "dma_device_type": 2 00:15:57.497 } 00:15:57.497 ], 00:15:57.497 "driver_specific": {} 00:15:57.497 } 00:15:57.497 ] 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.497 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.498 "name": "Existed_Raid", 00:15:57.498 "uuid": "2c317b32-6565-4e8b-bede-cb13d00b7d33", 00:15:57.498 "strip_size_kb": 0, 00:15:57.498 "state": "configuring", 00:15:57.498 "raid_level": "raid1", 00:15:57.498 "superblock": true, 00:15:57.498 "num_base_bdevs": 2, 00:15:57.498 
"num_base_bdevs_discovered": 1, 00:15:57.498 "num_base_bdevs_operational": 2, 00:15:57.498 "base_bdevs_list": [ 00:15:57.498 { 00:15:57.498 "name": "BaseBdev1", 00:15:57.498 "uuid": "19e98848-0644-4901-86d8-556b816bafeb", 00:15:57.498 "is_configured": true, 00:15:57.498 "data_offset": 256, 00:15:57.498 "data_size": 7936 00:15:57.498 }, 00:15:57.498 { 00:15:57.498 "name": "BaseBdev2", 00:15:57.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.498 "is_configured": false, 00:15:57.498 "data_offset": 0, 00:15:57.498 "data_size": 0 00:15:57.498 } 00:15:57.498 ] 00:15:57.498 }' 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.498 12:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.067 [2024-11-19 12:36:03.104402] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.067 [2024-11-19 12:36:03.104472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.067 [2024-11-19 12:36:03.116420] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.067 [2024-11-19 12:36:03.118298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.067 [2024-11-19 12:36:03.118347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.067 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.067 "name": "Existed_Raid", 00:15:58.067 "uuid": "ee14a15b-f803-48c7-9dfb-cda609dda3f1", 00:15:58.067 "strip_size_kb": 0, 00:15:58.067 "state": "configuring", 00:15:58.067 "raid_level": "raid1", 00:15:58.067 "superblock": true, 00:15:58.067 "num_base_bdevs": 2, 00:15:58.068 "num_base_bdevs_discovered": 1, 00:15:58.068 "num_base_bdevs_operational": 2, 00:15:58.068 "base_bdevs_list": [ 00:15:58.068 { 00:15:58.068 "name": "BaseBdev1", 00:15:58.068 "uuid": "19e98848-0644-4901-86d8-556b816bafeb", 00:15:58.068 "is_configured": true, 00:15:58.068 "data_offset": 256, 00:15:58.068 "data_size": 7936 00:15:58.068 }, 00:15:58.068 { 00:15:58.068 "name": "BaseBdev2", 00:15:58.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.068 "is_configured": false, 00:15:58.068 "data_offset": 0, 00:15:58.068 "data_size": 0 00:15:58.068 } 00:15:58.068 ] 00:15:58.068 }' 00:15:58.068 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.068 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.328 12:36:03 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.328 [2024-11-19 12:36:03.544297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.328 [2024-11-19 12:36:03.544551] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:58.328 [2024-11-19 12:36:03.544570] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:58.328 [2024-11-19 12:36:03.544905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:58.328 [2024-11-19 12:36:03.545085] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:58.328 [2024-11-19 12:36:03.545122] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:58.328 BaseBdev2 00:15:58.328 [2024-11-19 12:36:03.545267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:58.328 12:36:03 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.328 [ 00:15:58.328 { 00:15:58.328 "name": "BaseBdev2", 00:15:58.328 "aliases": [ 00:15:58.328 "2b0d417a-07fa-4c8b-8f99-0d43a0f559c7" 00:15:58.328 ], 00:15:58.328 "product_name": "Malloc disk", 00:15:58.328 "block_size": 4096, 00:15:58.328 "num_blocks": 8192, 00:15:58.328 "uuid": "2b0d417a-07fa-4c8b-8f99-0d43a0f559c7", 00:15:58.328 "assigned_rate_limits": { 00:15:58.328 "rw_ios_per_sec": 0, 00:15:58.328 "rw_mbytes_per_sec": 0, 00:15:58.328 "r_mbytes_per_sec": 0, 00:15:58.328 "w_mbytes_per_sec": 0 00:15:58.328 }, 00:15:58.328 "claimed": true, 00:15:58.328 "claim_type": "exclusive_write", 00:15:58.328 "zoned": false, 00:15:58.328 "supported_io_types": { 00:15:58.328 "read": true, 00:15:58.328 "write": true, 00:15:58.328 "unmap": true, 00:15:58.328 "flush": true, 00:15:58.328 "reset": true, 00:15:58.328 "nvme_admin": false, 00:15:58.328 "nvme_io": false, 00:15:58.328 "nvme_io_md": false, 00:15:58.328 "write_zeroes": true, 00:15:58.328 "zcopy": true, 00:15:58.328 "get_zone_info": false, 00:15:58.328 "zone_management": false, 00:15:58.328 "zone_append": false, 00:15:58.328 "compare": false, 00:15:58.328 "compare_and_write": false, 00:15:58.328 "abort": true, 00:15:58.328 "seek_hole": false, 00:15:58.328 "seek_data": false, 00:15:58.328 "copy": true, 00:15:58.328 "nvme_iov_md": false 
00:15:58.328 }, 00:15:58.328 "memory_domains": [ 00:15:58.328 { 00:15:58.328 "dma_device_id": "system", 00:15:58.328 "dma_device_type": 1 00:15:58.328 }, 00:15:58.328 { 00:15:58.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.328 "dma_device_type": 2 00:15:58.328 } 00:15:58.328 ], 00:15:58.328 "driver_specific": {} 00:15:58.328 } 00:15:58.328 ] 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.328 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.329 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.329 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:58.588 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.588 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.588 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.588 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.588 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.588 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.588 "name": "Existed_Raid", 00:15:58.588 "uuid": "ee14a15b-f803-48c7-9dfb-cda609dda3f1", 00:15:58.588 "strip_size_kb": 0, 00:15:58.588 "state": "online", 00:15:58.588 "raid_level": "raid1", 00:15:58.588 "superblock": true, 00:15:58.588 "num_base_bdevs": 2, 00:15:58.588 "num_base_bdevs_discovered": 2, 00:15:58.588 "num_base_bdevs_operational": 2, 00:15:58.588 "base_bdevs_list": [ 00:15:58.588 { 00:15:58.588 "name": "BaseBdev1", 00:15:58.588 "uuid": "19e98848-0644-4901-86d8-556b816bafeb", 00:15:58.588 "is_configured": true, 00:15:58.588 "data_offset": 256, 00:15:58.588 "data_size": 7936 00:15:58.588 }, 00:15:58.588 { 00:15:58.588 "name": "BaseBdev2", 00:15:58.588 "uuid": "2b0d417a-07fa-4c8b-8f99-0d43a0f559c7", 00:15:58.588 "is_configured": true, 00:15:58.588 "data_offset": 256, 00:15:58.588 "data_size": 7936 00:15:58.588 } 00:15:58.588 ] 00:15:58.588 }' 00:15:58.588 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.588 12:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.848 12:36:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.848 [2024-11-19 12:36:04.035848] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.848 "name": "Existed_Raid", 00:15:58.848 "aliases": [ 00:15:58.848 "ee14a15b-f803-48c7-9dfb-cda609dda3f1" 00:15:58.848 ], 00:15:58.848 "product_name": "Raid Volume", 00:15:58.848 "block_size": 4096, 00:15:58.848 "num_blocks": 7936, 00:15:58.848 "uuid": "ee14a15b-f803-48c7-9dfb-cda609dda3f1", 00:15:58.848 "assigned_rate_limits": { 00:15:58.848 "rw_ios_per_sec": 0, 00:15:58.848 "rw_mbytes_per_sec": 0, 00:15:58.848 "r_mbytes_per_sec": 0, 00:15:58.848 "w_mbytes_per_sec": 0 00:15:58.848 }, 00:15:58.848 "claimed": false, 00:15:58.848 "zoned": false, 00:15:58.848 "supported_io_types": { 00:15:58.848 "read": true, 
00:15:58.848 "write": true, 00:15:58.848 "unmap": false, 00:15:58.848 "flush": false, 00:15:58.848 "reset": true, 00:15:58.848 "nvme_admin": false, 00:15:58.848 "nvme_io": false, 00:15:58.848 "nvme_io_md": false, 00:15:58.848 "write_zeroes": true, 00:15:58.848 "zcopy": false, 00:15:58.848 "get_zone_info": false, 00:15:58.848 "zone_management": false, 00:15:58.848 "zone_append": false, 00:15:58.848 "compare": false, 00:15:58.848 "compare_and_write": false, 00:15:58.848 "abort": false, 00:15:58.848 "seek_hole": false, 00:15:58.848 "seek_data": false, 00:15:58.848 "copy": false, 00:15:58.848 "nvme_iov_md": false 00:15:58.848 }, 00:15:58.848 "memory_domains": [ 00:15:58.848 { 00:15:58.848 "dma_device_id": "system", 00:15:58.848 "dma_device_type": 1 00:15:58.848 }, 00:15:58.848 { 00:15:58.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.848 "dma_device_type": 2 00:15:58.848 }, 00:15:58.848 { 00:15:58.848 "dma_device_id": "system", 00:15:58.848 "dma_device_type": 1 00:15:58.848 }, 00:15:58.848 { 00:15:58.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.848 "dma_device_type": 2 00:15:58.848 } 00:15:58.848 ], 00:15:58.848 "driver_specific": { 00:15:58.848 "raid": { 00:15:58.848 "uuid": "ee14a15b-f803-48c7-9dfb-cda609dda3f1", 00:15:58.848 "strip_size_kb": 0, 00:15:58.848 "state": "online", 00:15:58.848 "raid_level": "raid1", 00:15:58.848 "superblock": true, 00:15:58.848 "num_base_bdevs": 2, 00:15:58.848 "num_base_bdevs_discovered": 2, 00:15:58.848 "num_base_bdevs_operational": 2, 00:15:58.848 "base_bdevs_list": [ 00:15:58.848 { 00:15:58.848 "name": "BaseBdev1", 00:15:58.848 "uuid": "19e98848-0644-4901-86d8-556b816bafeb", 00:15:58.848 "is_configured": true, 00:15:58.848 "data_offset": 256, 00:15:58.848 "data_size": 7936 00:15:58.848 }, 00:15:58.848 { 00:15:58.848 "name": "BaseBdev2", 00:15:58.848 "uuid": "2b0d417a-07fa-4c8b-8f99-0d43a0f559c7", 00:15:58.848 "is_configured": true, 00:15:58.848 "data_offset": 256, 00:15:58.848 "data_size": 7936 00:15:58.848 } 
00:15:58.848 ] 00:15:58.848 } 00:15:58.848 } 00:15:58.848 }' 00:15:58.848 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:59.108 BaseBdev2' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.108 [2024-11-19 12:36:04.283188] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:59.108 12:36:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.108 "name": "Existed_Raid", 00:15:59.108 "uuid": "ee14a15b-f803-48c7-9dfb-cda609dda3f1", 00:15:59.108 "strip_size_kb": 0, 00:15:59.108 "state": "online", 00:15:59.108 "raid_level": "raid1", 00:15:59.108 "superblock": true, 00:15:59.108 
"num_base_bdevs": 2, 00:15:59.108 "num_base_bdevs_discovered": 1, 00:15:59.108 "num_base_bdevs_operational": 1, 00:15:59.108 "base_bdevs_list": [ 00:15:59.108 { 00:15:59.108 "name": null, 00:15:59.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.108 "is_configured": false, 00:15:59.108 "data_offset": 0, 00:15:59.108 "data_size": 7936 00:15:59.108 }, 00:15:59.108 { 00:15:59.108 "name": "BaseBdev2", 00:15:59.108 "uuid": "2b0d417a-07fa-4c8b-8f99-0d43a0f559c7", 00:15:59.108 "is_configured": true, 00:15:59.108 "data_offset": 256, 00:15:59.108 "data_size": 7936 00:15:59.108 } 00:15:59.108 ] 00:15:59.108 }' 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.108 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.678 [2024-11-19 12:36:04.757893] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.678 [2024-11-19 12:36:04.758019] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.678 [2024-11-19 12:36:04.769515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.678 [2024-11-19 12:36:04.769568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.678 [2024-11-19 12:36:04.769580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:59.678 12:36:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96526 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96526 ']' 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96526 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96526 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:59.678 killing process with pid 96526 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96526' 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96526 00:15:59.678 [2024-11-19 12:36:04.855022] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:59.678 12:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96526 00:15:59.678 [2024-11-19 12:36:04.856103] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:59.938 12:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:59.938 00:15:59.938 real 0m3.967s 00:15:59.938 user 0m6.160s 00:15:59.938 sys 0m0.922s 00:15:59.938 12:36:05 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:59.938 12:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.938 ************************************ 00:15:59.938 END TEST raid_state_function_test_sb_4k 00:15:59.938 ************************************ 00:15:59.938 12:36:05 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:59.938 12:36:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:59.938 12:36:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:59.938 12:36:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:59.938 ************************************ 00:15:59.938 START TEST raid_superblock_test_4k 00:15:59.938 ************************************ 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:59.938 
12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96759 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96759 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96759 ']' 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.938 12:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.198 [2024-11-19 12:36:05.269340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:00.198 [2024-11-19 12:36:05.269467] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96759 ] 00:16:00.198 [2024-11-19 12:36:05.429152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.457 [2024-11-19 12:36:05.482379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.457 [2024-11-19 12:36:05.524394] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.457 [2024-11-19 12:36:05.524436] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.027 malloc1 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.027 [2024-11-19 12:36:06.158535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.027 [2024-11-19 12:36:06.158645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.027 [2024-11-19 12:36:06.158674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:01.027 [2024-11-19 12:36:06.158690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.027 [2024-11-19 12:36:06.161010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.027 [2024-11-19 12:36:06.161058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.027 pt1 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.027 malloc2 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.027 [2024-11-19 12:36:06.194313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.027 [2024-11-19 12:36:06.194393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.027 [2024-11-19 12:36:06.194412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:01.027 [2024-11-19 12:36:06.194422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.027 [2024-11-19 12:36:06.196713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.027 [2024-11-19 
12:36:06.196777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.027 pt2 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.027 [2024-11-19 12:36:06.206387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.027 [2024-11-19 12:36:06.208481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.027 [2024-11-19 12:36:06.208650] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:01.027 [2024-11-19 12:36:06.208672] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:01.027 [2024-11-19 12:36:06.208967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:01.027 [2024-11-19 12:36:06.209128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:01.027 [2024-11-19 12:36:06.209144] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:01.027 [2024-11-19 12:36:06.209321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.027 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.028 "name": "raid_bdev1", 00:16:01.028 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 00:16:01.028 "strip_size_kb": 0, 00:16:01.028 "state": "online", 00:16:01.028 "raid_level": "raid1", 00:16:01.028 "superblock": true, 00:16:01.028 "num_base_bdevs": 2, 00:16:01.028 
"num_base_bdevs_discovered": 2, 00:16:01.028 "num_base_bdevs_operational": 2, 00:16:01.028 "base_bdevs_list": [ 00:16:01.028 { 00:16:01.028 "name": "pt1", 00:16:01.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.028 "is_configured": true, 00:16:01.028 "data_offset": 256, 00:16:01.028 "data_size": 7936 00:16:01.028 }, 00:16:01.028 { 00:16:01.028 "name": "pt2", 00:16:01.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.028 "is_configured": true, 00:16:01.028 "data_offset": 256, 00:16:01.028 "data_size": 7936 00:16:01.028 } 00:16:01.028 ] 00:16:01.028 }' 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.028 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:01.597 [2024-11-19 12:36:06.677909] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:01.597 "name": "raid_bdev1", 00:16:01.597 "aliases": [ 00:16:01.597 "e53222ea-92fd-4167-9d2d-679d0185cfd7" 00:16:01.597 ], 00:16:01.597 "product_name": "Raid Volume", 00:16:01.597 "block_size": 4096, 00:16:01.597 "num_blocks": 7936, 00:16:01.597 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 00:16:01.597 "assigned_rate_limits": { 00:16:01.597 "rw_ios_per_sec": 0, 00:16:01.597 "rw_mbytes_per_sec": 0, 00:16:01.597 "r_mbytes_per_sec": 0, 00:16:01.597 "w_mbytes_per_sec": 0 00:16:01.597 }, 00:16:01.597 "claimed": false, 00:16:01.597 "zoned": false, 00:16:01.597 "supported_io_types": { 00:16:01.597 "read": true, 00:16:01.597 "write": true, 00:16:01.597 "unmap": false, 00:16:01.597 "flush": false, 00:16:01.597 "reset": true, 00:16:01.597 "nvme_admin": false, 00:16:01.597 "nvme_io": false, 00:16:01.597 "nvme_io_md": false, 00:16:01.597 "write_zeroes": true, 00:16:01.597 "zcopy": false, 00:16:01.597 "get_zone_info": false, 00:16:01.597 "zone_management": false, 00:16:01.597 "zone_append": false, 00:16:01.597 "compare": false, 00:16:01.597 "compare_and_write": false, 00:16:01.597 "abort": false, 00:16:01.597 "seek_hole": false, 00:16:01.597 "seek_data": false, 00:16:01.597 "copy": false, 00:16:01.597 "nvme_iov_md": false 00:16:01.597 }, 00:16:01.597 "memory_domains": [ 00:16:01.597 { 00:16:01.597 "dma_device_id": "system", 00:16:01.597 "dma_device_type": 1 00:16:01.597 }, 00:16:01.597 { 00:16:01.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.597 "dma_device_type": 2 00:16:01.597 }, 00:16:01.597 { 00:16:01.597 "dma_device_id": "system", 00:16:01.597 "dma_device_type": 1 00:16:01.597 }, 00:16:01.597 { 00:16:01.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.597 "dma_device_type": 2 00:16:01.597 } 00:16:01.597 ], 
00:16:01.597 "driver_specific": { 00:16:01.597 "raid": { 00:16:01.597 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 00:16:01.597 "strip_size_kb": 0, 00:16:01.597 "state": "online", 00:16:01.597 "raid_level": "raid1", 00:16:01.597 "superblock": true, 00:16:01.597 "num_base_bdevs": 2, 00:16:01.597 "num_base_bdevs_discovered": 2, 00:16:01.597 "num_base_bdevs_operational": 2, 00:16:01.597 "base_bdevs_list": [ 00:16:01.597 { 00:16:01.597 "name": "pt1", 00:16:01.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.597 "is_configured": true, 00:16:01.597 "data_offset": 256, 00:16:01.597 "data_size": 7936 00:16:01.597 }, 00:16:01.597 { 00:16:01.597 "name": "pt2", 00:16:01.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.597 "is_configured": true, 00:16:01.597 "data_offset": 256, 00:16:01.597 "data_size": 7936 00:16:01.597 } 00:16:01.597 ] 00:16:01.597 } 00:16:01.597 } 00:16:01.597 }' 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:01.597 pt2' 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.597 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.857 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.857 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:01.857 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:01.857 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.857 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.857 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.857 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:01.857 [2024-11-19 12:36:06.905456] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.857 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:01.857 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e53222ea-92fd-4167-9d2d-679d0185cfd7 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z e53222ea-92fd-4167-9d2d-679d0185cfd7 ']' 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.858 [2024-11-19 12:36:06.937127] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.858 [2024-11-19 12:36:06.937170] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.858 [2024-11-19 12:36:06.937262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.858 [2024-11-19 12:36:06.937345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.858 [2024-11-19 12:36:06.937358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.858 12:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.858 [2024-11-19 12:36:07.076964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:01.858 [2024-11-19 12:36:07.078940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:01.858 [2024-11-19 12:36:07.079027] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:01.858 [2024-11-19 12:36:07.079083] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:01.858 [2024-11-19 12:36:07.079100] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.858 [2024-11-19 12:36:07.079110] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:01.858 request: 00:16:01.858 { 00:16:01.858 "name": "raid_bdev1", 00:16:01.858 "raid_level": "raid1", 00:16:01.858 "base_bdevs": [ 00:16:01.858 "malloc1", 00:16:01.858 "malloc2" 00:16:01.858 ], 00:16:01.858 "superblock": false, 00:16:01.858 "method": "bdev_raid_create", 00:16:01.858 "req_id": 1 00:16:01.858 } 00:16:01.858 Got JSON-RPC error response 00:16:01.858 response: 00:16:01.858 { 00:16:01.858 "code": -17, 00:16:01.858 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:01.858 } 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.858 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.118 [2024-11-19 12:36:07.144824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.118 [2024-11-19 12:36:07.144926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.118 [2024-11-19 12:36:07.144948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:02.118 [2024-11-19 12:36:07.144958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.118 [2024-11-19 12:36:07.147232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.118 [2024-11-19 12:36:07.147278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.118 [2024-11-19 12:36:07.147372] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:02.118 [2024-11-19 12:36:07.147432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.118 pt1 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.118 "name": "raid_bdev1", 00:16:02.118 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 00:16:02.118 "strip_size_kb": 0, 00:16:02.118 "state": "configuring", 00:16:02.118 "raid_level": "raid1", 00:16:02.118 "superblock": true, 00:16:02.118 "num_base_bdevs": 2, 00:16:02.118 "num_base_bdevs_discovered": 1, 00:16:02.118 "num_base_bdevs_operational": 2, 00:16:02.118 "base_bdevs_list": [ 00:16:02.118 { 00:16:02.118 "name": "pt1", 00:16:02.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.118 "is_configured": true, 00:16:02.118 "data_offset": 256, 00:16:02.118 "data_size": 7936 00:16:02.118 }, 00:16:02.118 { 00:16:02.118 "name": null, 00:16:02.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.118 "is_configured": false, 00:16:02.118 "data_offset": 256, 00:16:02.118 "data_size": 7936 00:16:02.118 } 
00:16:02.118 ] 00:16:02.118 }' 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.118 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.380 [2024-11-19 12:36:07.623992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.380 [2024-11-19 12:36:07.624083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.380 [2024-11-19 12:36:07.624112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:02.380 [2024-11-19 12:36:07.624123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.380 [2024-11-19 12:36:07.624617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.380 [2024-11-19 12:36:07.624644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.380 [2024-11-19 12:36:07.624731] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:02.380 [2024-11-19 12:36:07.624775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.380 [2024-11-19 12:36:07.624881] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:16:02.380 [2024-11-19 12:36:07.624897] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:02.380 [2024-11-19 12:36:07.625183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:02.380 [2024-11-19 12:36:07.625322] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:02.380 [2024-11-19 12:36:07.625346] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:02.380 [2024-11-19 12:36:07.625461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.380 pt2 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.380 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.640 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.640 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.640 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.640 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.640 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.640 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.640 "name": "raid_bdev1", 00:16:02.640 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 00:16:02.640 "strip_size_kb": 0, 00:16:02.640 "state": "online", 00:16:02.640 "raid_level": "raid1", 00:16:02.640 "superblock": true, 00:16:02.640 "num_base_bdevs": 2, 00:16:02.640 "num_base_bdevs_discovered": 2, 00:16:02.640 "num_base_bdevs_operational": 2, 00:16:02.640 "base_bdevs_list": [ 00:16:02.640 { 00:16:02.640 "name": "pt1", 00:16:02.640 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.640 "is_configured": true, 00:16:02.640 "data_offset": 256, 00:16:02.640 "data_size": 7936 00:16:02.640 }, 00:16:02.640 { 00:16:02.640 "name": "pt2", 00:16:02.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.640 "is_configured": true, 00:16:02.640 "data_offset": 256, 00:16:02.640 "data_size": 7936 00:16:02.640 } 00:16:02.640 ] 00:16:02.640 }' 00:16:02.640 12:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.640 12:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.899 [2024-11-19 12:36:08.023564] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.899 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:02.899 "name": "raid_bdev1", 00:16:02.899 "aliases": [ 00:16:02.899 "e53222ea-92fd-4167-9d2d-679d0185cfd7" 00:16:02.899 ], 00:16:02.899 "product_name": "Raid Volume", 00:16:02.899 "block_size": 4096, 00:16:02.899 "num_blocks": 7936, 00:16:02.899 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 00:16:02.899 "assigned_rate_limits": { 00:16:02.899 "rw_ios_per_sec": 0, 00:16:02.899 "rw_mbytes_per_sec": 0, 00:16:02.899 "r_mbytes_per_sec": 0, 00:16:02.899 "w_mbytes_per_sec": 0 00:16:02.899 }, 00:16:02.899 "claimed": false, 00:16:02.899 "zoned": false, 00:16:02.899 "supported_io_types": { 00:16:02.899 "read": true, 00:16:02.899 "write": true, 00:16:02.899 "unmap": false, 
00:16:02.899 "flush": false, 00:16:02.899 "reset": true, 00:16:02.899 "nvme_admin": false, 00:16:02.899 "nvme_io": false, 00:16:02.899 "nvme_io_md": false, 00:16:02.899 "write_zeroes": true, 00:16:02.899 "zcopy": false, 00:16:02.899 "get_zone_info": false, 00:16:02.899 "zone_management": false, 00:16:02.899 "zone_append": false, 00:16:02.899 "compare": false, 00:16:02.899 "compare_and_write": false, 00:16:02.899 "abort": false, 00:16:02.899 "seek_hole": false, 00:16:02.899 "seek_data": false, 00:16:02.899 "copy": false, 00:16:02.899 "nvme_iov_md": false 00:16:02.899 }, 00:16:02.899 "memory_domains": [ 00:16:02.899 { 00:16:02.899 "dma_device_id": "system", 00:16:02.899 "dma_device_type": 1 00:16:02.899 }, 00:16:02.899 { 00:16:02.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.899 "dma_device_type": 2 00:16:02.899 }, 00:16:02.899 { 00:16:02.899 "dma_device_id": "system", 00:16:02.899 "dma_device_type": 1 00:16:02.899 }, 00:16:02.899 { 00:16:02.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.899 "dma_device_type": 2 00:16:02.899 } 00:16:02.899 ], 00:16:02.899 "driver_specific": { 00:16:02.899 "raid": { 00:16:02.899 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 00:16:02.899 "strip_size_kb": 0, 00:16:02.899 "state": "online", 00:16:02.899 "raid_level": "raid1", 00:16:02.899 "superblock": true, 00:16:02.899 "num_base_bdevs": 2, 00:16:02.899 "num_base_bdevs_discovered": 2, 00:16:02.899 "num_base_bdevs_operational": 2, 00:16:02.900 "base_bdevs_list": [ 00:16:02.900 { 00:16:02.900 "name": "pt1", 00:16:02.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.900 "is_configured": true, 00:16:02.900 "data_offset": 256, 00:16:02.900 "data_size": 7936 00:16:02.900 }, 00:16:02.900 { 00:16:02.900 "name": "pt2", 00:16:02.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.900 "is_configured": true, 00:16:02.900 "data_offset": 256, 00:16:02.900 "data_size": 7936 00:16:02.900 } 00:16:02.900 ] 00:16:02.900 } 00:16:02.900 } 00:16:02.900 }' 00:16:02.900 
12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.900 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:02.900 pt2' 00:16:02.900 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.900 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:02.900 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.900 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:02.900 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.900 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.900 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.159 
12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.159 [2024-11-19 12:36:08.243205] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' e53222ea-92fd-4167-9d2d-679d0185cfd7 '!=' e53222ea-92fd-4167-9d2d-679d0185cfd7 ']' 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.159 [2024-11-19 12:36:08.270976] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:03.159 
12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.159 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.159 "name": "raid_bdev1", 00:16:03.159 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 
00:16:03.159 "strip_size_kb": 0, 00:16:03.159 "state": "online", 00:16:03.159 "raid_level": "raid1", 00:16:03.159 "superblock": true, 00:16:03.159 "num_base_bdevs": 2, 00:16:03.159 "num_base_bdevs_discovered": 1, 00:16:03.159 "num_base_bdevs_operational": 1, 00:16:03.159 "base_bdevs_list": [ 00:16:03.159 { 00:16:03.159 "name": null, 00:16:03.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.159 "is_configured": false, 00:16:03.159 "data_offset": 0, 00:16:03.159 "data_size": 7936 00:16:03.159 }, 00:16:03.159 { 00:16:03.159 "name": "pt2", 00:16:03.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.159 "is_configured": true, 00:16:03.159 "data_offset": 256, 00:16:03.159 "data_size": 7936 00:16:03.159 } 00:16:03.159 ] 00:16:03.159 }' 00:16:03.160 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.160 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.728 [2024-11-19 12:36:08.718856] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.728 [2024-11-19 12:36:08.718903] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.728 [2024-11-19 12:36:08.719018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.728 [2024-11-19 12:36:08.719075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.728 [2024-11-19 12:36:08.719090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:03.728 12:36:08 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.728 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:03.729 12:36:08 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.729 [2024-11-19 12:36:08.794732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.729 [2024-11-19 12:36:08.794815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.729 [2024-11-19 12:36:08.794836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:03.729 [2024-11-19 12:36:08.794846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.729 [2024-11-19 12:36:08.797092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.729 [2024-11-19 12:36:08.797129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.729 [2024-11-19 12:36:08.797219] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.729 [2024-11-19 12:36:08.797253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.729 [2024-11-19 12:36:08.797335] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:03.729 [2024-11-19 12:36:08.797344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:03.729 [2024-11-19 12:36:08.797582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:03.729 [2024-11-19 12:36:08.797708] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:03.729 [2024-11-19 12:36:08.797727] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 
00:16:03.729 [2024-11-19 12:36:08.797853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.729 pt2 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.729 "name": "raid_bdev1", 00:16:03.729 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 00:16:03.729 "strip_size_kb": 0, 00:16:03.729 "state": "online", 00:16:03.729 "raid_level": "raid1", 00:16:03.729 "superblock": true, 00:16:03.729 "num_base_bdevs": 2, 00:16:03.729 "num_base_bdevs_discovered": 1, 00:16:03.729 "num_base_bdevs_operational": 1, 00:16:03.729 "base_bdevs_list": [ 00:16:03.729 { 00:16:03.729 "name": null, 00:16:03.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.729 "is_configured": false, 00:16:03.729 "data_offset": 256, 00:16:03.729 "data_size": 7936 00:16:03.729 }, 00:16:03.729 { 00:16:03.729 "name": "pt2", 00:16:03.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.729 "is_configured": true, 00:16:03.729 "data_offset": 256, 00:16:03.729 "data_size": 7936 00:16:03.729 } 00:16:03.729 ] 00:16:03.729 }' 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.729 12:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.988 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:03.988 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.988 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.247 [2024-11-19 12:36:09.250004] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.247 [2024-11-19 12:36:09.250053] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.247 [2024-11-19 12:36:09.250152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.247 [2024-11-19 12:36:09.250208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.247 [2024-11-19 12:36:09.250226] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.247 [2024-11-19 12:36:09.301902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:04.247 [2024-11-19 12:36:09.301996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.247 [2024-11-19 12:36:09.302022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:04.247 [2024-11-19 12:36:09.302041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.247 [2024-11-19 12:36:09.304348] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.247 [2024-11-19 12:36:09.304399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:04.247 [2024-11-19 12:36:09.304491] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:04.247 [2024-11-19 12:36:09.304552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:04.247 [2024-11-19 12:36:09.304669] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:04.247 [2024-11-19 12:36:09.304693] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.247 [2024-11-19 12:36:09.304719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:04.247 [2024-11-19 12:36:09.304787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.247 [2024-11-19 12:36:09.304866] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:04.247 [2024-11-19 12:36:09.304885] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:04.247 [2024-11-19 12:36:09.305132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:04.247 [2024-11-19 12:36:09.305253] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:04.247 [2024-11-19 12:36:09.305273] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:04.247 [2024-11-19 12:36:09.305391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.247 pt1 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.247 "name": "raid_bdev1", 00:16:04.247 "uuid": "e53222ea-92fd-4167-9d2d-679d0185cfd7", 00:16:04.247 "strip_size_kb": 0, 00:16:04.247 "state": "online", 00:16:04.247 "raid_level": "raid1", 
00:16:04.247 "superblock": true, 00:16:04.247 "num_base_bdevs": 2, 00:16:04.247 "num_base_bdevs_discovered": 1, 00:16:04.247 "num_base_bdevs_operational": 1, 00:16:04.247 "base_bdevs_list": [ 00:16:04.247 { 00:16:04.247 "name": null, 00:16:04.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.247 "is_configured": false, 00:16:04.247 "data_offset": 256, 00:16:04.247 "data_size": 7936 00:16:04.247 }, 00:16:04.247 { 00:16:04.247 "name": "pt2", 00:16:04.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.247 "is_configured": true, 00:16:04.247 "data_offset": 256, 00:16:04.247 "data_size": 7936 00:16:04.247 } 00:16:04.247 ] 00:16:04.247 }' 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.247 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.506 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:04.506 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:04.506 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.506 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.765 
[2024-11-19 12:36:09.793365] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' e53222ea-92fd-4167-9d2d-679d0185cfd7 '!=' e53222ea-92fd-4167-9d2d-679d0185cfd7 ']' 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96759 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96759 ']' 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96759 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96759 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:04.765 killing process with pid 96759 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96759' 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96759 00:16:04.765 [2024-11-19 12:36:09.866165] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:04.765 [2024-11-19 12:36:09.866281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.765 [2024-11-19 12:36:09.866335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.765 [2024-11-19 12:36:09.866345] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:04.765 12:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96759 00:16:04.765 [2024-11-19 12:36:09.889664] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.024 12:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:05.024 00:16:05.024 real 0m4.950s 00:16:05.024 user 0m7.962s 00:16:05.024 sys 0m1.186s 00:16:05.024 12:36:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.024 12:36:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.024 ************************************ 00:16:05.024 END TEST raid_superblock_test_4k 00:16:05.024 ************************************ 00:16:05.024 12:36:10 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:05.024 12:36:10 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:05.024 12:36:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:05.024 12:36:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.024 12:36:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.024 ************************************ 00:16:05.024 START TEST raid_rebuild_test_sb_4k 00:16:05.024 ************************************ 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:05.024 12:36:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=97076 00:16:05.024 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:05.025 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 97076 00:16:05.025 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 97076 ']' 00:16:05.025 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.025 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.025 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.025 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.025 12:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.284 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:05.284 Zero copy mechanism will not be used. 00:16:05.284 [2024-11-19 12:36:10.312840] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:05.284 [2024-11-19 12:36:10.313008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97076 ] 00:16:05.284 [2024-11-19 12:36:10.463395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.284 [2024-11-19 12:36:10.516201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.543 [2024-11-19 12:36:10.558321] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.543 [2024-11-19 12:36:10.558362] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.111 BaseBdev1_malloc 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.111 [2024-11-19 12:36:11.184482] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:06.111 [2024-11-19 12:36:11.184567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.111 [2024-11-19 12:36:11.184595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:06.111 [2024-11-19 12:36:11.184611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.111 [2024-11-19 12:36:11.186831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.111 [2024-11-19 12:36:11.186868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:06.111 BaseBdev1 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.111 BaseBdev2_malloc 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.111 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.111 [2024-11-19 12:36:11.222563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:06.111 [2024-11-19 12:36:11.222649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:06.111 [2024-11-19 12:36:11.222676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:06.111 [2024-11-19 12:36:11.222687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.111 [2024-11-19 12:36:11.225220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.112 [2024-11-19 12:36:11.225266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:06.112 BaseBdev2 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.112 spare_malloc 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.112 spare_delay 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.112 
[2024-11-19 12:36:11.263272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.112 [2024-11-19 12:36:11.263346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.112 [2024-11-19 12:36:11.263370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:06.112 [2024-11-19 12:36:11.263379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.112 [2024-11-19 12:36:11.265578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.112 [2024-11-19 12:36:11.265617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.112 spare 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.112 [2024-11-19 12:36:11.275314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.112 [2024-11-19 12:36:11.277342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.112 [2024-11-19 12:36:11.277520] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:06.112 [2024-11-19 12:36:11.277538] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:06.112 [2024-11-19 12:36:11.277849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:06.112 [2024-11-19 12:36:11.278021] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:06.112 [2024-11-19 
12:36:11.278042] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:06.112 [2024-11-19 12:36:11.278207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.112 "name": "raid_bdev1", 00:16:06.112 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:06.112 "strip_size_kb": 0, 00:16:06.112 "state": "online", 00:16:06.112 "raid_level": "raid1", 00:16:06.112 "superblock": true, 00:16:06.112 "num_base_bdevs": 2, 00:16:06.112 "num_base_bdevs_discovered": 2, 00:16:06.112 "num_base_bdevs_operational": 2, 00:16:06.112 "base_bdevs_list": [ 00:16:06.112 { 00:16:06.112 "name": "BaseBdev1", 00:16:06.112 "uuid": "56211819-be75-5b4a-abe2-337958bd1e18", 00:16:06.112 "is_configured": true, 00:16:06.112 "data_offset": 256, 00:16:06.112 "data_size": 7936 00:16:06.112 }, 00:16:06.112 { 00:16:06.112 "name": "BaseBdev2", 00:16:06.112 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:06.112 "is_configured": true, 00:16:06.112 "data_offset": 256, 00:16:06.112 "data_size": 7936 00:16:06.112 } 00:16:06.112 ] 00:16:06.112 }' 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.112 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.681 [2024-11-19 12:36:11.730906] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.681 12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.681 
12:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:06.941 [2024-11-19 12:36:12.018147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:06.941 /dev/nbd0 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.941 1+0 records in 00:16:06.941 1+0 records out 00:16:06.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403183 s, 10.2 MB/s 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:06.941 12:36:12 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:06.941 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:07.509 7936+0 records in 00:16:07.509 7936+0 records out 00:16:07.509 32505856 bytes (33 MB, 31 MiB) copied, 0.577779 s, 56.3 MB/s 00:16:07.509 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:07.509 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.509 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.509 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.509 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:07.509 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.509 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.768 [2024-11-19 12:36:12.887837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.768 [2024-11-19 12:36:12.909418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.768 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.769 12:36:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.769 "name": "raid_bdev1", 00:16:07.769 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:07.769 "strip_size_kb": 0, 00:16:07.769 "state": "online", 00:16:07.769 "raid_level": "raid1", 00:16:07.769 "superblock": true, 00:16:07.769 "num_base_bdevs": 2, 00:16:07.769 "num_base_bdevs_discovered": 1, 00:16:07.769 "num_base_bdevs_operational": 1, 00:16:07.769 "base_bdevs_list": [ 00:16:07.769 { 00:16:07.769 "name": null, 00:16:07.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.769 "is_configured": false, 00:16:07.769 "data_offset": 0, 00:16:07.769 "data_size": 7936 00:16:07.769 }, 00:16:07.769 { 00:16:07.769 "name": "BaseBdev2", 00:16:07.769 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:07.769 "is_configured": true, 00:16:07.769 "data_offset": 256, 00:16:07.769 
"data_size": 7936 00:16:07.769 } 00:16:07.769 ] 00:16:07.769 }' 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.769 12:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.338 12:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.338 12:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.338 12:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.338 [2024-11-19 12:36:13.340835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.338 [2024-11-19 12:36:13.345054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:08.338 12:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.338 12:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:08.338 [2024-11-19 12:36:13.347075] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.276 "name": "raid_bdev1", 00:16:09.276 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:09.276 "strip_size_kb": 0, 00:16:09.276 "state": "online", 00:16:09.276 "raid_level": "raid1", 00:16:09.276 "superblock": true, 00:16:09.276 "num_base_bdevs": 2, 00:16:09.276 "num_base_bdevs_discovered": 2, 00:16:09.276 "num_base_bdevs_operational": 2, 00:16:09.276 "process": { 00:16:09.276 "type": "rebuild", 00:16:09.276 "target": "spare", 00:16:09.276 "progress": { 00:16:09.276 "blocks": 2560, 00:16:09.276 "percent": 32 00:16:09.276 } 00:16:09.276 }, 00:16:09.276 "base_bdevs_list": [ 00:16:09.276 { 00:16:09.276 "name": "spare", 00:16:09.276 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:09.276 "is_configured": true, 00:16:09.276 "data_offset": 256, 00:16:09.276 "data_size": 7936 00:16:09.276 }, 00:16:09.276 { 00:16:09.276 "name": "BaseBdev2", 00:16:09.276 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:09.276 "is_configured": true, 00:16:09.276 "data_offset": 256, 00:16:09.276 "data_size": 7936 00:16:09.276 } 00:16:09.276 ] 00:16:09.276 }' 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.276 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.276 [2024-11-19 12:36:14.516288] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.536 [2024-11-19 12:36:14.552814] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.536 [2024-11-19 12:36:14.552885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.536 [2024-11-19 12:36:14.552905] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.536 [2024-11-19 12:36:14.552913] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.536 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.536 "name": "raid_bdev1", 00:16:09.536 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:09.536 "strip_size_kb": 0, 00:16:09.536 "state": "online", 00:16:09.536 "raid_level": "raid1", 00:16:09.536 "superblock": true, 00:16:09.536 "num_base_bdevs": 2, 00:16:09.536 "num_base_bdevs_discovered": 1, 00:16:09.536 "num_base_bdevs_operational": 1, 00:16:09.536 "base_bdevs_list": [ 00:16:09.536 { 00:16:09.536 "name": null, 00:16:09.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.536 "is_configured": false, 00:16:09.536 "data_offset": 0, 00:16:09.536 "data_size": 7936 00:16:09.536 }, 00:16:09.536 { 00:16:09.536 "name": "BaseBdev2", 00:16:09.536 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:09.536 "is_configured": true, 00:16:09.536 "data_offset": 256, 00:16:09.536 "data_size": 7936 00:16:09.536 } 00:16:09.536 ] 00:16:09.536 }' 00:16:09.537 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.537 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.796 12:36:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.796 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.796 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.796 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.796 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.796 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.796 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.796 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.796 12:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.796 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.796 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.796 "name": "raid_bdev1", 00:16:09.796 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:09.796 "strip_size_kb": 0, 00:16:09.796 "state": "online", 00:16:09.796 "raid_level": "raid1", 00:16:09.796 "superblock": true, 00:16:09.796 "num_base_bdevs": 2, 00:16:09.796 "num_base_bdevs_discovered": 1, 00:16:09.796 "num_base_bdevs_operational": 1, 00:16:09.796 "base_bdevs_list": [ 00:16:09.796 { 00:16:09.796 "name": null, 00:16:09.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.796 "is_configured": false, 00:16:09.796 "data_offset": 0, 00:16:09.796 "data_size": 7936 00:16:09.796 }, 00:16:09.796 { 00:16:09.796 "name": "BaseBdev2", 00:16:09.796 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:09.796 "is_configured": true, 00:16:09.796 "data_offset": 
256, 00:16:09.796 "data_size": 7936 00:16:09.796 } 00:16:09.796 ] 00:16:09.796 }' 00:16:09.796 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.796 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.054 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.054 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.054 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.054 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.054 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.055 [2024-11-19 12:36:15.096640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.055 [2024-11-19 12:36:15.100849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:10.055 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.055 12:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:10.055 [2024-11-19 12:36:15.102728] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.991 "name": "raid_bdev1", 00:16:10.991 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:10.991 "strip_size_kb": 0, 00:16:10.991 "state": "online", 00:16:10.991 "raid_level": "raid1", 00:16:10.991 "superblock": true, 00:16:10.991 "num_base_bdevs": 2, 00:16:10.991 "num_base_bdevs_discovered": 2, 00:16:10.991 "num_base_bdevs_operational": 2, 00:16:10.991 "process": { 00:16:10.991 "type": "rebuild", 00:16:10.991 "target": "spare", 00:16:10.991 "progress": { 00:16:10.991 "blocks": 2560, 00:16:10.991 "percent": 32 00:16:10.991 } 00:16:10.991 }, 00:16:10.991 "base_bdevs_list": [ 00:16:10.991 { 00:16:10.991 "name": "spare", 00:16:10.991 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:10.991 "is_configured": true, 00:16:10.991 "data_offset": 256, 00:16:10.991 "data_size": 7936 00:16:10.991 }, 00:16:10.991 { 00:16:10.991 "name": "BaseBdev2", 00:16:10.991 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:10.991 "is_configured": true, 00:16:10.991 "data_offset": 256, 00:16:10.991 "data_size": 7936 00:16:10.991 } 00:16:10.991 ] 00:16:10.991 }' 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:10.991 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=569 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.991 12:36:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.991 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.251 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.251 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.251 "name": "raid_bdev1", 00:16:11.251 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:11.251 "strip_size_kb": 0, 00:16:11.251 "state": "online", 00:16:11.251 "raid_level": "raid1", 00:16:11.251 "superblock": true, 00:16:11.251 "num_base_bdevs": 2, 00:16:11.251 "num_base_bdevs_discovered": 2, 00:16:11.251 "num_base_bdevs_operational": 2, 00:16:11.251 "process": { 00:16:11.251 "type": "rebuild", 00:16:11.251 "target": "spare", 00:16:11.251 "progress": { 00:16:11.251 "blocks": 2816, 00:16:11.251 "percent": 35 00:16:11.251 } 00:16:11.251 }, 00:16:11.251 "base_bdevs_list": [ 00:16:11.251 { 00:16:11.251 "name": "spare", 00:16:11.251 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:11.251 "is_configured": true, 00:16:11.251 "data_offset": 256, 00:16:11.251 "data_size": 7936 00:16:11.251 }, 00:16:11.251 { 00:16:11.251 "name": "BaseBdev2", 00:16:11.251 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:11.251 "is_configured": true, 00:16:11.251 "data_offset": 256, 00:16:11.251 "data_size": 7936 00:16:11.251 } 00:16:11.251 ] 00:16:11.251 }' 00:16:11.251 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.251 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.251 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.251 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.251 12:36:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.188 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.188 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.189 "name": "raid_bdev1", 00:16:12.189 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:12.189 "strip_size_kb": 0, 00:16:12.189 "state": "online", 00:16:12.189 "raid_level": "raid1", 00:16:12.189 "superblock": true, 00:16:12.189 "num_base_bdevs": 2, 00:16:12.189 "num_base_bdevs_discovered": 2, 00:16:12.189 "num_base_bdevs_operational": 2, 00:16:12.189 "process": { 00:16:12.189 "type": "rebuild", 00:16:12.189 "target": "spare", 00:16:12.189 "progress": { 00:16:12.189 "blocks": 5632, 00:16:12.189 "percent": 70 00:16:12.189 } 00:16:12.189 }, 00:16:12.189 "base_bdevs_list": [ 00:16:12.189 { 
00:16:12.189 "name": "spare", 00:16:12.189 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:12.189 "is_configured": true, 00:16:12.189 "data_offset": 256, 00:16:12.189 "data_size": 7936 00:16:12.189 }, 00:16:12.189 { 00:16:12.189 "name": "BaseBdev2", 00:16:12.189 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:12.189 "is_configured": true, 00:16:12.189 "data_offset": 256, 00:16:12.189 "data_size": 7936 00:16:12.189 } 00:16:12.189 ] 00:16:12.189 }' 00:16:12.189 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.448 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.448 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.448 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.448 12:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.016 [2024-11-19 12:36:18.216038] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:13.016 [2024-11-19 12:36:18.216148] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:13.016 [2024-11-19 12:36:18.216276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.281 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.557 "name": "raid_bdev1", 00:16:13.557 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:13.557 "strip_size_kb": 0, 00:16:13.557 "state": "online", 00:16:13.557 "raid_level": "raid1", 00:16:13.557 "superblock": true, 00:16:13.557 "num_base_bdevs": 2, 00:16:13.557 "num_base_bdevs_discovered": 2, 00:16:13.557 "num_base_bdevs_operational": 2, 00:16:13.557 "base_bdevs_list": [ 00:16:13.557 { 00:16:13.557 "name": "spare", 00:16:13.557 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:13.557 "is_configured": true, 00:16:13.557 "data_offset": 256, 00:16:13.557 "data_size": 7936 00:16:13.557 }, 00:16:13.557 { 00:16:13.557 "name": "BaseBdev2", 00:16:13.557 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:13.557 "is_configured": true, 00:16:13.557 "data_offset": 256, 00:16:13.557 "data_size": 7936 00:16:13.557 } 00:16:13.557 ] 00:16:13.557 }' 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.557 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.557 "name": "raid_bdev1", 00:16:13.557 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:13.557 "strip_size_kb": 0, 00:16:13.557 "state": "online", 00:16:13.557 "raid_level": "raid1", 00:16:13.557 "superblock": true, 00:16:13.557 "num_base_bdevs": 2, 00:16:13.558 "num_base_bdevs_discovered": 2, 00:16:13.558 "num_base_bdevs_operational": 2, 00:16:13.558 "base_bdevs_list": [ 00:16:13.558 { 00:16:13.558 "name": "spare", 00:16:13.558 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:13.558 "is_configured": true, 00:16:13.558 
"data_offset": 256, 00:16:13.558 "data_size": 7936 00:16:13.558 }, 00:16:13.558 { 00:16:13.558 "name": "BaseBdev2", 00:16:13.558 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:13.558 "is_configured": true, 00:16:13.558 "data_offset": 256, 00:16:13.558 "data_size": 7936 00:16:13.558 } 00:16:13.558 ] 00:16:13.558 }' 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.558 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.831 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.831 "name": "raid_bdev1", 00:16:13.831 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:13.831 "strip_size_kb": 0, 00:16:13.831 "state": "online", 00:16:13.831 "raid_level": "raid1", 00:16:13.831 "superblock": true, 00:16:13.831 "num_base_bdevs": 2, 00:16:13.831 "num_base_bdevs_discovered": 2, 00:16:13.831 "num_base_bdevs_operational": 2, 00:16:13.831 "base_bdevs_list": [ 00:16:13.831 { 00:16:13.831 "name": "spare", 00:16:13.831 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:13.831 "is_configured": true, 00:16:13.831 "data_offset": 256, 00:16:13.831 "data_size": 7936 00:16:13.831 }, 00:16:13.831 { 00:16:13.831 "name": "BaseBdev2", 00:16:13.831 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:13.831 "is_configured": true, 00:16:13.831 "data_offset": 256, 00:16:13.831 "data_size": 7936 00:16:13.831 } 00:16:13.831 ] 00:16:13.831 }' 00:16:13.831 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.831 12:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.091 
[2024-11-19 12:36:19.242963] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.091 [2024-11-19 12:36:19.243003] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.091 [2024-11-19 12:36:19.243114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.091 [2024-11-19 12:36:19.243193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.091 [2024-11-19 12:36:19.243214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:14.091 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:14.092 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:14.350 /dev/nbd0 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:14.350 1+0 records in 00:16:14.350 1+0 records out 00:16:14.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039638 s, 10.3 MB/s 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:14.350 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:14.610 /dev/nbd1 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:14.610 1+0 records in 00:16:14.610 1+0 records out 00:16:14.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427553 s, 9.6 MB/s 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:14.610 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:14.869 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:14.869 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:14.869 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:14.869 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:14.869 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:14.870 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:14.870 12:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:15.129 12:36:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.129 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.389 [2024-11-19 12:36:20.407295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.389 [2024-11-19 12:36:20.407370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.389 [2024-11-19 12:36:20.407395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:15.389 [2024-11-19 12:36:20.407408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.389 [2024-11-19 12:36:20.409667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.389 
[2024-11-19 12:36:20.409715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.389 [2024-11-19 12:36:20.409818] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:15.389 [2024-11-19 12:36:20.409875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.389 [2024-11-19 12:36:20.409990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.389 spare 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.389 [2024-11-19 12:36:20.509915] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:15.389 [2024-11-19 12:36:20.509969] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:15.389 [2024-11-19 12:36:20.510322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:15.389 [2024-11-19 12:36:20.510513] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:15.389 [2024-11-19 12:36:20.510537] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:15.389 [2024-11-19 12:36:20.510705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.389 12:36:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.389 "name": "raid_bdev1", 00:16:15.389 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:15.389 "strip_size_kb": 0, 00:16:15.389 "state": "online", 00:16:15.389 "raid_level": "raid1", 00:16:15.389 "superblock": true, 00:16:15.389 "num_base_bdevs": 2, 00:16:15.389 "num_base_bdevs_discovered": 2, 00:16:15.389 "num_base_bdevs_operational": 2, 
00:16:15.389 "base_bdevs_list": [ 00:16:15.389 { 00:16:15.389 "name": "spare", 00:16:15.389 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:15.389 "is_configured": true, 00:16:15.389 "data_offset": 256, 00:16:15.389 "data_size": 7936 00:16:15.389 }, 00:16:15.389 { 00:16:15.389 "name": "BaseBdev2", 00:16:15.389 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:15.389 "is_configured": true, 00:16:15.389 "data_offset": 256, 00:16:15.389 "data_size": 7936 00:16:15.389 } 00:16:15.389 ] 00:16:15.389 }' 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.389 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.957 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.957 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.957 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.957 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.957 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.957 12:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.957 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.957 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.957 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.957 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.957 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.957 "name": "raid_bdev1", 00:16:15.957 
"uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:15.958 "strip_size_kb": 0, 00:16:15.958 "state": "online", 00:16:15.958 "raid_level": "raid1", 00:16:15.958 "superblock": true, 00:16:15.958 "num_base_bdevs": 2, 00:16:15.958 "num_base_bdevs_discovered": 2, 00:16:15.958 "num_base_bdevs_operational": 2, 00:16:15.958 "base_bdevs_list": [ 00:16:15.958 { 00:16:15.958 "name": "spare", 00:16:15.958 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:15.958 "is_configured": true, 00:16:15.958 "data_offset": 256, 00:16:15.958 "data_size": 7936 00:16:15.958 }, 00:16:15.958 { 00:16:15.958 "name": "BaseBdev2", 00:16:15.958 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:15.958 "is_configured": true, 00:16:15.958 "data_offset": 256, 00:16:15.958 "data_size": 7936 00:16:15.958 } 00:16:15.958 ] 00:16:15.958 }' 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.958 [2024-11-19 12:36:21.206070] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.958 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.218 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.218 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.218 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.218 
12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.218 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.218 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.218 "name": "raid_bdev1", 00:16:16.218 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:16.218 "strip_size_kb": 0, 00:16:16.218 "state": "online", 00:16:16.218 "raid_level": "raid1", 00:16:16.218 "superblock": true, 00:16:16.218 "num_base_bdevs": 2, 00:16:16.218 "num_base_bdevs_discovered": 1, 00:16:16.218 "num_base_bdevs_operational": 1, 00:16:16.218 "base_bdevs_list": [ 00:16:16.218 { 00:16:16.218 "name": null, 00:16:16.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.218 "is_configured": false, 00:16:16.218 "data_offset": 0, 00:16:16.218 "data_size": 7936 00:16:16.218 }, 00:16:16.218 { 00:16:16.218 "name": "BaseBdev2", 00:16:16.218 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:16.218 "is_configured": true, 00:16:16.218 "data_offset": 256, 00:16:16.218 "data_size": 7936 00:16:16.218 } 00:16:16.218 ] 00:16:16.218 }' 00:16:16.218 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.218 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.477 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:16.478 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.478 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.478 [2024-11-19 12:36:21.653302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.478 [2024-11-19 12:36:21.653501] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:16:16.478 [2024-11-19 12:36:21.653515] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:16.478 [2024-11-19 12:36:21.653557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.478 [2024-11-19 12:36:21.657552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:16.478 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.478 12:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:16.478 [2024-11-19 12:36:21.659499] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:17.415 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.415 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.415 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.415 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.415 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.415 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.415 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.415 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.415 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.674 
"name": "raid_bdev1", 00:16:17.674 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:17.674 "strip_size_kb": 0, 00:16:17.674 "state": "online", 00:16:17.674 "raid_level": "raid1", 00:16:17.674 "superblock": true, 00:16:17.674 "num_base_bdevs": 2, 00:16:17.674 "num_base_bdevs_discovered": 2, 00:16:17.674 "num_base_bdevs_operational": 2, 00:16:17.674 "process": { 00:16:17.674 "type": "rebuild", 00:16:17.674 "target": "spare", 00:16:17.674 "progress": { 00:16:17.674 "blocks": 2560, 00:16:17.674 "percent": 32 00:16:17.674 } 00:16:17.674 }, 00:16:17.674 "base_bdevs_list": [ 00:16:17.674 { 00:16:17.674 "name": "spare", 00:16:17.674 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:17.674 "is_configured": true, 00:16:17.674 "data_offset": 256, 00:16:17.674 "data_size": 7936 00:16:17.674 }, 00:16:17.674 { 00:16:17.674 "name": "BaseBdev2", 00:16:17.674 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:17.674 "is_configured": true, 00:16:17.674 "data_offset": 256, 00:16:17.674 "data_size": 7936 00:16:17.674 } 00:16:17.674 ] 00:16:17.674 }' 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.674 [2024-11-19 12:36:22.824711] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.674 [2024-11-19 
12:36:22.864567] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:17.674 [2024-11-19 12:36:22.864650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.674 [2024-11-19 12:36:22.864667] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.674 [2024-11-19 12:36:22.864675] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:17.674 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.675 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.675 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.675 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.675 "name": "raid_bdev1", 00:16:17.675 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:17.675 "strip_size_kb": 0, 00:16:17.675 "state": "online", 00:16:17.675 "raid_level": "raid1", 00:16:17.675 "superblock": true, 00:16:17.675 "num_base_bdevs": 2, 00:16:17.675 "num_base_bdevs_discovered": 1, 00:16:17.675 "num_base_bdevs_operational": 1, 00:16:17.675 "base_bdevs_list": [ 00:16:17.675 { 00:16:17.675 "name": null, 00:16:17.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.675 "is_configured": false, 00:16:17.675 "data_offset": 0, 00:16:17.675 "data_size": 7936 00:16:17.675 }, 00:16:17.675 { 00:16:17.675 "name": "BaseBdev2", 00:16:17.675 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:17.675 "is_configured": true, 00:16:17.675 "data_offset": 256, 00:16:17.675 "data_size": 7936 00:16:17.675 } 00:16:17.675 ] 00:16:17.675 }' 00:16:17.675 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.675 12:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.243 12:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:18.243 12:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.243 12:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.243 [2024-11-19 12:36:23.292256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:18.243 [2024-11-19 12:36:23.292331] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.243 [2024-11-19 12:36:23.292361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:18.243 [2024-11-19 12:36:23.292370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.243 [2024-11-19 12:36:23.292862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.243 [2024-11-19 12:36:23.292889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:18.243 [2024-11-19 12:36:23.292982] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:18.243 [2024-11-19 12:36:23.293000] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:18.243 [2024-11-19 12:36:23.293018] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:18.243 [2024-11-19 12:36:23.293038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.243 [2024-11-19 12:36:23.297071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:18.243 spare 00:16:18.243 12:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.243 12:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:18.243 [2024-11-19 12:36:23.299027] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.180 "name": "raid_bdev1", 00:16:19.180 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:19.180 "strip_size_kb": 0, 00:16:19.180 
"state": "online", 00:16:19.180 "raid_level": "raid1", 00:16:19.180 "superblock": true, 00:16:19.180 "num_base_bdevs": 2, 00:16:19.180 "num_base_bdevs_discovered": 2, 00:16:19.180 "num_base_bdevs_operational": 2, 00:16:19.180 "process": { 00:16:19.180 "type": "rebuild", 00:16:19.180 "target": "spare", 00:16:19.180 "progress": { 00:16:19.180 "blocks": 2560, 00:16:19.180 "percent": 32 00:16:19.180 } 00:16:19.180 }, 00:16:19.180 "base_bdevs_list": [ 00:16:19.180 { 00:16:19.180 "name": "spare", 00:16:19.180 "uuid": "1e8fa8c4-9ef4-5b83-b911-50ae41bdaadc", 00:16:19.180 "is_configured": true, 00:16:19.180 "data_offset": 256, 00:16:19.180 "data_size": 7936 00:16:19.180 }, 00:16:19.180 { 00:16:19.180 "name": "BaseBdev2", 00:16:19.180 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:19.180 "is_configured": true, 00:16:19.180 "data_offset": 256, 00:16:19.180 "data_size": 7936 00:16:19.180 } 00:16:19.180 ] 00:16:19.180 }' 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.180 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.439 [2024-11-19 12:36:24.443730] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.439 [2024-11-19 12:36:24.504087] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:16:19.439 [2024-11-19 12:36:24.504181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.439 [2024-11-19 12:36:24.504196] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.439 [2024-11-19 12:36:24.504205] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.439 "name": "raid_bdev1", 00:16:19.439 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:19.439 "strip_size_kb": 0, 00:16:19.439 "state": "online", 00:16:19.439 "raid_level": "raid1", 00:16:19.439 "superblock": true, 00:16:19.439 "num_base_bdevs": 2, 00:16:19.439 "num_base_bdevs_discovered": 1, 00:16:19.439 "num_base_bdevs_operational": 1, 00:16:19.439 "base_bdevs_list": [ 00:16:19.439 { 00:16:19.439 "name": null, 00:16:19.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.439 "is_configured": false, 00:16:19.439 "data_offset": 0, 00:16:19.439 "data_size": 7936 00:16:19.439 }, 00:16:19.439 { 00:16:19.439 "name": "BaseBdev2", 00:16:19.439 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:19.439 "is_configured": true, 00:16:19.439 "data_offset": 256, 00:16:19.439 "data_size": 7936 00:16:19.439 } 00:16:19.439 ] 00:16:19.439 }' 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.439 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.008 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.008 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.008 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.008 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.008 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.008 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.008 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.008 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.008 12:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.008 "name": "raid_bdev1", 00:16:20.008 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:20.008 "strip_size_kb": 0, 00:16:20.008 "state": "online", 00:16:20.008 "raid_level": "raid1", 00:16:20.008 "superblock": true, 00:16:20.008 "num_base_bdevs": 2, 00:16:20.008 "num_base_bdevs_discovered": 1, 00:16:20.008 "num_base_bdevs_operational": 1, 00:16:20.008 "base_bdevs_list": [ 00:16:20.008 { 00:16:20.008 "name": null, 00:16:20.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.008 "is_configured": false, 00:16:20.008 "data_offset": 0, 00:16:20.008 "data_size": 7936 00:16:20.008 }, 00:16:20.008 { 00:16:20.008 "name": "BaseBdev2", 00:16:20.008 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:20.008 "is_configured": true, 00:16:20.008 "data_offset": 256, 00:16:20.008 "data_size": 7936 00:16:20.008 } 00:16:20.008 ] 00:16:20.008 }' 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.008 [2024-11-19 12:36:25.127579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:20.008 [2024-11-19 12:36:25.127661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.008 [2024-11-19 12:36:25.127683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:20.008 [2024-11-19 12:36:25.127694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.008 [2024-11-19 12:36:25.128120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.008 [2024-11-19 12:36:25.128154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:20.008 [2024-11-19 12:36:25.128239] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:20.008 [2024-11-19 12:36:25.128266] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:20.008 [2024-11-19 12:36:25.128278] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:20.008 [2024-11-19 12:36:25.128294] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:16:20.008 BaseBdev1 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.008 12:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.948 12:36:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.948 "name": "raid_bdev1", 00:16:20.948 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:20.948 "strip_size_kb": 0, 00:16:20.948 "state": "online", 00:16:20.948 "raid_level": "raid1", 00:16:20.948 "superblock": true, 00:16:20.948 "num_base_bdevs": 2, 00:16:20.948 "num_base_bdevs_discovered": 1, 00:16:20.948 "num_base_bdevs_operational": 1, 00:16:20.948 "base_bdevs_list": [ 00:16:20.948 { 00:16:20.948 "name": null, 00:16:20.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.948 "is_configured": false, 00:16:20.948 "data_offset": 0, 00:16:20.948 "data_size": 7936 00:16:20.948 }, 00:16:20.948 { 00:16:20.948 "name": "BaseBdev2", 00:16:20.948 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:20.948 "is_configured": true, 00:16:20.948 "data_offset": 256, 00:16:20.948 "data_size": 7936 00:16:20.948 } 00:16:20.948 ] 00:16:20.948 }' 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.948 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.516 12:36:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.516 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.516 "name": "raid_bdev1", 00:16:21.516 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:21.517 "strip_size_kb": 0, 00:16:21.517 "state": "online", 00:16:21.517 "raid_level": "raid1", 00:16:21.517 "superblock": true, 00:16:21.517 "num_base_bdevs": 2, 00:16:21.517 "num_base_bdevs_discovered": 1, 00:16:21.517 "num_base_bdevs_operational": 1, 00:16:21.517 "base_bdevs_list": [ 00:16:21.517 { 00:16:21.517 "name": null, 00:16:21.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.517 "is_configured": false, 00:16:21.517 "data_offset": 0, 00:16:21.517 "data_size": 7936 00:16:21.517 }, 00:16:21.517 { 00:16:21.517 "name": "BaseBdev2", 00:16:21.517 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:21.517 "is_configured": true, 00:16:21.517 "data_offset": 256, 00:16:21.517 "data_size": 7936 00:16:21.517 } 00:16:21.517 ] 00:16:21.517 }' 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:16:21.517 12:36:26 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.517 [2024-11-19 12:36:26.732895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.517 [2024-11-19 12:36:26.733084] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.517 [2024-11-19 12:36:26.733107] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:21.517 request: 00:16:21.517 { 00:16:21.517 "base_bdev": "BaseBdev1", 00:16:21.517 "raid_bdev": "raid_bdev1", 00:16:21.517 "method": "bdev_raid_add_base_bdev", 00:16:21.517 "req_id": 1 00:16:21.517 } 00:16:21.517 Got JSON-RPC error response 00:16:21.517 response: 00:16:21.517 { 00:16:21.517 "code": -22, 00:16:21.517 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:21.517 } 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@653 -- # es=1 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.517 12:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.894 "name": "raid_bdev1", 00:16:22.894 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:22.894 "strip_size_kb": 0, 00:16:22.894 "state": "online", 00:16:22.894 "raid_level": "raid1", 00:16:22.894 "superblock": true, 00:16:22.894 "num_base_bdevs": 2, 00:16:22.894 "num_base_bdevs_discovered": 1, 00:16:22.894 "num_base_bdevs_operational": 1, 00:16:22.894 "base_bdevs_list": [ 00:16:22.894 { 00:16:22.894 "name": null, 00:16:22.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.894 "is_configured": false, 00:16:22.894 "data_offset": 0, 00:16:22.894 "data_size": 7936 00:16:22.894 }, 00:16:22.894 { 00:16:22.894 "name": "BaseBdev2", 00:16:22.894 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:22.894 "is_configured": true, 00:16:22.894 "data_offset": 256, 00:16:22.894 "data_size": 7936 00:16:22.894 } 00:16:22.894 ] 00:16:22.894 }' 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.894 12:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.153 "name": "raid_bdev1", 00:16:23.153 "uuid": "46028d26-9ad8-4b15-a516-a9c660902f60", 00:16:23.153 "strip_size_kb": 0, 00:16:23.153 "state": "online", 00:16:23.153 "raid_level": "raid1", 00:16:23.153 "superblock": true, 00:16:23.153 "num_base_bdevs": 2, 00:16:23.153 "num_base_bdevs_discovered": 1, 00:16:23.153 "num_base_bdevs_operational": 1, 00:16:23.153 "base_bdevs_list": [ 00:16:23.153 { 00:16:23.153 "name": null, 00:16:23.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.153 "is_configured": false, 00:16:23.153 "data_offset": 0, 00:16:23.153 "data_size": 7936 00:16:23.153 }, 00:16:23.153 { 00:16:23.153 "name": "BaseBdev2", 00:16:23.153 "uuid": "ba07847e-b223-5e32-8d76-6055db686734", 00:16:23.153 "is_configured": true, 00:16:23.153 "data_offset": 256, 00:16:23.153 "data_size": 7936 00:16:23.153 } 00:16:23.153 ] 00:16:23.153 }' 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 97076 
00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 97076 ']' 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 97076 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:23.153 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97076 00:16:23.413 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:23.413 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:23.413 killing process with pid 97076 00:16:23.413 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97076' 00:16:23.413 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 97076 00:16:23.413 Received shutdown signal, test time was about 60.000000 seconds 00:16:23.413 00:16:23.413 Latency(us) 00:16:23.413 [2024-11-19T12:36:28.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.413 [2024-11-19T12:36:28.674Z] =================================================================================================================== 00:16:23.413 [2024-11-19T12:36:28.674Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:23.413 [2024-11-19 12:36:28.430819] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:23.413 [2024-11-19 12:36:28.430957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.413 [2024-11-19 12:36:28.431023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.413 [2024-11-19 12:36:28.431033] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:23.413 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 97076 00:16:23.413 [2024-11-19 12:36:28.462768] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.672 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:23.672 00:16:23.672 real 0m18.486s 00:16:23.672 user 0m24.462s 00:16:23.672 sys 0m2.848s 00:16:23.672 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.672 12:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.672 ************************************ 00:16:23.672 END TEST raid_rebuild_test_sb_4k 00:16:23.672 ************************************ 00:16:23.672 12:36:28 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:23.672 12:36:28 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:23.672 12:36:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:23.672 12:36:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.672 12:36:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.672 ************************************ 00:16:23.672 START TEST raid_state_function_test_sb_md_separate 00:16:23.672 ************************************ 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- 
# local superblock=true 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:23.672 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 
00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97751 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97751' 00:16:23.673 Process raid pid: 97751 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97751 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97751 ']' 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.673 12:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.673 [2024-11-19 12:36:28.873289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:23.673 [2024-11-19 12:36:28.873449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.932 [2024-11-19 12:36:29.026276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.932 [2024-11-19 12:36:29.077705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.932 [2024-11-19 12:36:29.119128] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.932 [2024-11-19 12:36:29.119258] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.500 [2024-11-19 12:36:29.720445] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.500 [2024-11-19 12:36:29.720505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:16:24.500 [2024-11-19 12:36:29.720517] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.500 [2024-11-19 12:36:29.720527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.500 
12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.500 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.759 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.759 "name": "Existed_Raid", 00:16:24.759 "uuid": "c5ef96f2-5f21-45d8-aa85-3a00040cdb2c", 00:16:24.759 "strip_size_kb": 0, 00:16:24.759 "state": "configuring", 00:16:24.759 "raid_level": "raid1", 00:16:24.759 "superblock": true, 00:16:24.759 "num_base_bdevs": 2, 00:16:24.759 "num_base_bdevs_discovered": 0, 00:16:24.759 "num_base_bdevs_operational": 2, 00:16:24.759 "base_bdevs_list": [ 00:16:24.759 { 00:16:24.759 "name": "BaseBdev1", 00:16:24.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.759 "is_configured": false, 00:16:24.759 "data_offset": 0, 00:16:24.759 "data_size": 0 00:16:24.759 }, 00:16:24.759 { 00:16:24.759 "name": "BaseBdev2", 00:16:24.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.759 "is_configured": false, 00:16:24.759 "data_offset": 0, 00:16:24.759 "data_size": 0 00:16:24.759 } 00:16:24.759 ] 00:16:24.759 }' 00:16:24.759 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.759 12:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.019 
[2024-11-19 12:36:30.179557] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.019 [2024-11-19 12:36:30.179674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.019 [2024-11-19 12:36:30.191565] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.019 [2024-11-19 12:36:30.191654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.019 [2024-11-19 12:36:30.191682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.019 [2024-11-19 12:36:30.191705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.019 [2024-11-19 12:36:30.212995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.019 
BaseBdev1 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.019 [ 00:16:25.019 { 00:16:25.019 "name": "BaseBdev1", 00:16:25.019 "aliases": [ 00:16:25.019 "e157005e-d571-41c5-937b-5d4f57f63d36" 00:16:25.019 ], 00:16:25.019 "product_name": "Malloc disk", 
00:16:25.019 "block_size": 4096, 00:16:25.019 "num_blocks": 8192, 00:16:25.019 "uuid": "e157005e-d571-41c5-937b-5d4f57f63d36", 00:16:25.019 "md_size": 32, 00:16:25.019 "md_interleave": false, 00:16:25.019 "dif_type": 0, 00:16:25.019 "assigned_rate_limits": { 00:16:25.019 "rw_ios_per_sec": 0, 00:16:25.019 "rw_mbytes_per_sec": 0, 00:16:25.019 "r_mbytes_per_sec": 0, 00:16:25.019 "w_mbytes_per_sec": 0 00:16:25.019 }, 00:16:25.019 "claimed": true, 00:16:25.019 "claim_type": "exclusive_write", 00:16:25.019 "zoned": false, 00:16:25.019 "supported_io_types": { 00:16:25.019 "read": true, 00:16:25.019 "write": true, 00:16:25.019 "unmap": true, 00:16:25.019 "flush": true, 00:16:25.019 "reset": true, 00:16:25.019 "nvme_admin": false, 00:16:25.019 "nvme_io": false, 00:16:25.019 "nvme_io_md": false, 00:16:25.019 "write_zeroes": true, 00:16:25.019 "zcopy": true, 00:16:25.019 "get_zone_info": false, 00:16:25.019 "zone_management": false, 00:16:25.019 "zone_append": false, 00:16:25.019 "compare": false, 00:16:25.019 "compare_and_write": false, 00:16:25.019 "abort": true, 00:16:25.019 "seek_hole": false, 00:16:25.019 "seek_data": false, 00:16:25.019 "copy": true, 00:16:25.019 "nvme_iov_md": false 00:16:25.019 }, 00:16:25.019 "memory_domains": [ 00:16:25.019 { 00:16:25.019 "dma_device_id": "system", 00:16:25.019 "dma_device_type": 1 00:16:25.019 }, 00:16:25.019 { 00:16:25.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.019 "dma_device_type": 2 00:16:25.019 } 00:16:25.019 ], 00:16:25.019 "driver_specific": {} 00:16:25.019 } 00:16:25.019 ] 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:25.019 12:36:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.019 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.020 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.020 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.020 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.020 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.279 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.279 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.279 "name": "Existed_Raid", 00:16:25.279 "uuid": "4783c876-fb42-49f7-9b29-1d4d69bbcfce", 
00:16:25.279 "strip_size_kb": 0, 00:16:25.279 "state": "configuring", 00:16:25.279 "raid_level": "raid1", 00:16:25.279 "superblock": true, 00:16:25.279 "num_base_bdevs": 2, 00:16:25.279 "num_base_bdevs_discovered": 1, 00:16:25.279 "num_base_bdevs_operational": 2, 00:16:25.279 "base_bdevs_list": [ 00:16:25.279 { 00:16:25.279 "name": "BaseBdev1", 00:16:25.279 "uuid": "e157005e-d571-41c5-937b-5d4f57f63d36", 00:16:25.279 "is_configured": true, 00:16:25.279 "data_offset": 256, 00:16:25.279 "data_size": 7936 00:16:25.279 }, 00:16:25.279 { 00:16:25.279 "name": "BaseBdev2", 00:16:25.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.279 "is_configured": false, 00:16:25.279 "data_offset": 0, 00:16:25.279 "data_size": 0 00:16:25.279 } 00:16:25.279 ] 00:16:25.279 }' 00:16:25.279 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.279 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 [2024-11-19 12:36:30.688306] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.539 [2024-11-19 12:36:30.688440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:25.539 12:36:30 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 [2024-11-19 12:36:30.700383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.539 [2024-11-19 12:36:30.702338] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.539 [2024-11-19 12:36:30.702440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.539 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.539 "name": "Existed_Raid", 00:16:25.539 "uuid": "5facd648-72ec-4812-8d24-693dfe3747d6", 00:16:25.539 "strip_size_kb": 0, 00:16:25.539 "state": "configuring", 00:16:25.539 "raid_level": "raid1", 00:16:25.539 "superblock": true, 00:16:25.539 "num_base_bdevs": 2, 00:16:25.539 "num_base_bdevs_discovered": 1, 00:16:25.539 "num_base_bdevs_operational": 2, 00:16:25.539 "base_bdevs_list": [ 00:16:25.539 { 00:16:25.540 "name": "BaseBdev1", 00:16:25.540 "uuid": "e157005e-d571-41c5-937b-5d4f57f63d36", 00:16:25.540 "is_configured": true, 00:16:25.540 "data_offset": 256, 00:16:25.540 "data_size": 7936 00:16:25.540 }, 00:16:25.540 { 00:16:25.540 "name": "BaseBdev2", 00:16:25.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.540 "is_configured": false, 00:16:25.540 "data_offset": 0, 00:16:25.540 "data_size": 0 00:16:25.540 } 00:16:25.540 ] 00:16:25.540 }' 00:16:25.540 12:36:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.540 12:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.109 [2024-11-19 12:36:31.182789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.109 [2024-11-19 12:36:31.183035] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:26.109 [2024-11-19 12:36:31.183053] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:26.109 [2024-11-19 12:36:31.183187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:26.109 [2024-11-19 12:36:31.183312] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:26.109 [2024-11-19 12:36:31.183329] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:26.109 [2024-11-19 12:36:31.183422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.109 BaseBdev2 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.109 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.110 [ 00:16:26.110 { 00:16:26.110 "name": "BaseBdev2", 00:16:26.110 "aliases": [ 00:16:26.110 "aafdc25b-387b-4f2e-ba17-15895a07c88e" 00:16:26.110 ], 00:16:26.110 "product_name": "Malloc disk", 00:16:26.110 "block_size": 4096, 00:16:26.110 "num_blocks": 8192, 00:16:26.110 "uuid": "aafdc25b-387b-4f2e-ba17-15895a07c88e", 00:16:26.110 "md_size": 32, 00:16:26.110 "md_interleave": false, 00:16:26.110 "dif_type": 0, 00:16:26.110 "assigned_rate_limits": { 00:16:26.110 "rw_ios_per_sec": 0, 00:16:26.110 "rw_mbytes_per_sec": 0, 00:16:26.110 "r_mbytes_per_sec": 0, 00:16:26.110 "w_mbytes_per_sec": 0 00:16:26.110 }, 00:16:26.110 "claimed": true, 00:16:26.110 "claim_type": 
"exclusive_write", 00:16:26.110 "zoned": false, 00:16:26.110 "supported_io_types": { 00:16:26.110 "read": true, 00:16:26.110 "write": true, 00:16:26.110 "unmap": true, 00:16:26.110 "flush": true, 00:16:26.110 "reset": true, 00:16:26.110 "nvme_admin": false, 00:16:26.110 "nvme_io": false, 00:16:26.110 "nvme_io_md": false, 00:16:26.110 "write_zeroes": true, 00:16:26.110 "zcopy": true, 00:16:26.110 "get_zone_info": false, 00:16:26.110 "zone_management": false, 00:16:26.110 "zone_append": false, 00:16:26.110 "compare": false, 00:16:26.110 "compare_and_write": false, 00:16:26.110 "abort": true, 00:16:26.110 "seek_hole": false, 00:16:26.110 "seek_data": false, 00:16:26.110 "copy": true, 00:16:26.110 "nvme_iov_md": false 00:16:26.110 }, 00:16:26.110 "memory_domains": [ 00:16:26.110 { 00:16:26.110 "dma_device_id": "system", 00:16:26.110 "dma_device_type": 1 00:16:26.110 }, 00:16:26.110 { 00:16:26.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.110 "dma_device_type": 2 00:16:26.110 } 00:16:26.110 ], 00:16:26.110 "driver_specific": {} 00:16:26.110 } 00:16:26.110 ] 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.110 
12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.110 "name": "Existed_Raid", 00:16:26.110 "uuid": "5facd648-72ec-4812-8d24-693dfe3747d6", 00:16:26.110 "strip_size_kb": 0, 00:16:26.110 "state": "online", 00:16:26.110 "raid_level": "raid1", 00:16:26.110 "superblock": true, 00:16:26.110 "num_base_bdevs": 2, 00:16:26.110 "num_base_bdevs_discovered": 2, 00:16:26.110 "num_base_bdevs_operational": 2, 00:16:26.110 
"base_bdevs_list": [ 00:16:26.110 { 00:16:26.110 "name": "BaseBdev1", 00:16:26.110 "uuid": "e157005e-d571-41c5-937b-5d4f57f63d36", 00:16:26.110 "is_configured": true, 00:16:26.110 "data_offset": 256, 00:16:26.110 "data_size": 7936 00:16:26.110 }, 00:16:26.110 { 00:16:26.110 "name": "BaseBdev2", 00:16:26.110 "uuid": "aafdc25b-387b-4f2e-ba17-15895a07c88e", 00:16:26.110 "is_configured": true, 00:16:26.110 "data_offset": 256, 00:16:26.110 "data_size": 7936 00:16:26.110 } 00:16:26.110 ] 00:16:26.110 }' 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.110 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:16:26.679 [2024-11-19 12:36:31.678271] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.679 "name": "Existed_Raid", 00:16:26.679 "aliases": [ 00:16:26.679 "5facd648-72ec-4812-8d24-693dfe3747d6" 00:16:26.679 ], 00:16:26.679 "product_name": "Raid Volume", 00:16:26.679 "block_size": 4096, 00:16:26.679 "num_blocks": 7936, 00:16:26.679 "uuid": "5facd648-72ec-4812-8d24-693dfe3747d6", 00:16:26.679 "md_size": 32, 00:16:26.679 "md_interleave": false, 00:16:26.679 "dif_type": 0, 00:16:26.679 "assigned_rate_limits": { 00:16:26.679 "rw_ios_per_sec": 0, 00:16:26.679 "rw_mbytes_per_sec": 0, 00:16:26.679 "r_mbytes_per_sec": 0, 00:16:26.679 "w_mbytes_per_sec": 0 00:16:26.679 }, 00:16:26.679 "claimed": false, 00:16:26.679 "zoned": false, 00:16:26.679 "supported_io_types": { 00:16:26.679 "read": true, 00:16:26.679 "write": true, 00:16:26.679 "unmap": false, 00:16:26.679 "flush": false, 00:16:26.679 "reset": true, 00:16:26.679 "nvme_admin": false, 00:16:26.679 "nvme_io": false, 00:16:26.679 "nvme_io_md": false, 00:16:26.679 "write_zeroes": true, 00:16:26.679 "zcopy": false, 00:16:26.679 "get_zone_info": false, 00:16:26.679 "zone_management": false, 00:16:26.679 "zone_append": false, 00:16:26.679 "compare": false, 00:16:26.679 "compare_and_write": false, 00:16:26.679 "abort": false, 00:16:26.679 "seek_hole": false, 00:16:26.679 "seek_data": false, 00:16:26.679 "copy": false, 00:16:26.679 "nvme_iov_md": false 00:16:26.679 }, 00:16:26.679 "memory_domains": [ 00:16:26.679 { 00:16:26.679 "dma_device_id": "system", 00:16:26.679 "dma_device_type": 1 00:16:26.679 }, 00:16:26.679 { 00:16:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.679 "dma_device_type": 2 00:16:26.679 }, 00:16:26.679 { 
00:16:26.679 "dma_device_id": "system", 00:16:26.679 "dma_device_type": 1 00:16:26.679 }, 00:16:26.679 { 00:16:26.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.679 "dma_device_type": 2 00:16:26.679 } 00:16:26.679 ], 00:16:26.679 "driver_specific": { 00:16:26.679 "raid": { 00:16:26.679 "uuid": "5facd648-72ec-4812-8d24-693dfe3747d6", 00:16:26.679 "strip_size_kb": 0, 00:16:26.679 "state": "online", 00:16:26.679 "raid_level": "raid1", 00:16:26.679 "superblock": true, 00:16:26.679 "num_base_bdevs": 2, 00:16:26.679 "num_base_bdevs_discovered": 2, 00:16:26.679 "num_base_bdevs_operational": 2, 00:16:26.679 "base_bdevs_list": [ 00:16:26.679 { 00:16:26.679 "name": "BaseBdev1", 00:16:26.679 "uuid": "e157005e-d571-41c5-937b-5d4f57f63d36", 00:16:26.679 "is_configured": true, 00:16:26.679 "data_offset": 256, 00:16:26.679 "data_size": 7936 00:16:26.679 }, 00:16:26.679 { 00:16:26.679 "name": "BaseBdev2", 00:16:26.679 "uuid": "aafdc25b-387b-4f2e-ba17-15895a07c88e", 00:16:26.679 "is_configured": true, 00:16:26.679 "data_offset": 256, 00:16:26.679 "data_size": 7936 00:16:26.679 } 00:16:26.679 ] 00:16:26.679 } 00:16:26.679 } 00:16:26.679 }' 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:26.679 BaseBdev2' 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.679 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.680 [2024-11-19 12:36:31.921654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.680 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.939 "name": "Existed_Raid", 00:16:26.939 "uuid": "5facd648-72ec-4812-8d24-693dfe3747d6", 00:16:26.939 "strip_size_kb": 0, 00:16:26.939 "state": "online", 00:16:26.939 "raid_level": "raid1", 00:16:26.939 "superblock": true, 00:16:26.939 "num_base_bdevs": 2, 00:16:26.939 "num_base_bdevs_discovered": 1, 00:16:26.939 "num_base_bdevs_operational": 1, 00:16:26.939 "base_bdevs_list": [ 00:16:26.939 { 00:16:26.939 "name": null, 00:16:26.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.939 "is_configured": false, 00:16:26.939 "data_offset": 0, 00:16:26.939 "data_size": 7936 00:16:26.939 }, 00:16:26.939 { 00:16:26.939 "name": "BaseBdev2", 00:16:26.939 "uuid": 
"aafdc25b-387b-4f2e-ba17-15895a07c88e", 00:16:26.939 "is_configured": true, 00:16:26.939 "data_offset": 256, 00:16:26.939 "data_size": 7936 00:16:26.939 } 00:16:26.939 ] 00:16:26.939 }' 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.939 12:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.206 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.476 [2024-11-19 12:36:32.460916] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.476 [2024-11-19 12:36:32.461043] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.476 [2024-11-19 12:36:32.473496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.476 [2024-11-19 12:36:32.473550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.476 [2024-11-19 12:36:32.473563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:27.476 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.476 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.476 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.476 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:27.476 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.476 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.476 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.476 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.476 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:27.477 12:36:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97751 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97751 ']' 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97751 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97751 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:27.477 killing process with pid 97751 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97751' 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97751 00:16:27.477 [2024-11-19 12:36:32.577724] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.477 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97751 00:16:27.477 [2024-11-19 12:36:32.578787] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.736 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:27.736 00:16:27.736 real 0m4.050s 00:16:27.736 user 0m6.309s 00:16:27.736 sys 0m0.892s 00:16:27.736 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.736 
************************************ 00:16:27.736 END TEST raid_state_function_test_sb_md_separate 00:16:27.736 ************************************ 00:16:27.736 12:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.736 12:36:32 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:27.736 12:36:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:27.736 12:36:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.736 12:36:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.736 ************************************ 00:16:27.736 START TEST raid_superblock_test_md_separate 00:16:27.736 ************************************ 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97992 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97992 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97992 ']' 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.736 12:36:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:27.737 12:36:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.737 12:36:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.737 [2024-11-19 12:36:32.984930] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:27.737 [2024-11-19 12:36:32.985142] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97992 ] 00:16:27.995 [2024-11-19 12:36:33.145387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.995 [2024-11-19 12:36:33.197378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.995 [2024-11-19 12:36:33.238960] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.995 [2024-11-19 12:36:33.239078] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:28.934 12:36:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 malloc1 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 [2024-11-19 12:36:33.873531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:28.934 [2024-11-19 12:36:33.873600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.934 [2024-11-19 12:36:33.873623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:28.934 [2024-11-19 12:36:33.873636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.934 [2024-11-19 12:36:33.875588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.934 [2024-11-19 12:36:33.875630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:16:28.934 pt1 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 malloc2 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.934 12:36:33 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 [2024-11-19 12:36:33.914249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.934 [2024-11-19 12:36:33.914378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.934 [2024-11-19 12:36:33.914419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:28.934 [2024-11-19 12:36:33.914459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.934 [2024-11-19 12:36:33.916789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.934 [2024-11-19 12:36:33.916874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.934 pt2 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 [2024-11-19 12:36:33.926250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:28.934 [2024-11-19 12:36:33.928158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.934 [2024-11-19 12:36:33.928346] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:28.934 [2024-11-19 12:36:33.928399] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:28.934 [2024-11-19 12:36:33.928523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:28.934 [2024-11-19 12:36:33.928664] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:28.934 [2024-11-19 12:36:33.928706] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:28.934 [2024-11-19 12:36:33.928852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.934 12:36:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.934 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.934 "name": "raid_bdev1", 00:16:28.934 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:28.934 "strip_size_kb": 0, 00:16:28.934 "state": "online", 00:16:28.934 "raid_level": "raid1", 00:16:28.934 "superblock": true, 00:16:28.935 "num_base_bdevs": 2, 00:16:28.935 "num_base_bdevs_discovered": 2, 00:16:28.935 "num_base_bdevs_operational": 2, 00:16:28.935 "base_bdevs_list": [ 00:16:28.935 { 00:16:28.935 "name": "pt1", 00:16:28.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.935 "is_configured": true, 00:16:28.935 "data_offset": 256, 00:16:28.935 "data_size": 7936 00:16:28.935 }, 00:16:28.935 { 00:16:28.935 "name": "pt2", 00:16:28.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.935 "is_configured": true, 00:16:28.935 "data_offset": 256, 00:16:28.935 "data_size": 7936 00:16:28.935 } 00:16:28.935 ] 00:16:28.935 }' 00:16:28.935 12:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.935 12:36:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.194 [2024-11-19 12:36:34.413740] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.194 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:29.194 "name": "raid_bdev1", 00:16:29.194 "aliases": [ 00:16:29.194 "e4494679-7214-48a4-8d35-e4c1018d7d86" 00:16:29.194 ], 00:16:29.194 "product_name": "Raid Volume", 00:16:29.194 "block_size": 4096, 00:16:29.194 "num_blocks": 7936, 00:16:29.194 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:29.194 "md_size": 32, 00:16:29.194 "md_interleave": false, 00:16:29.194 "dif_type": 0, 00:16:29.194 "assigned_rate_limits": { 00:16:29.194 "rw_ios_per_sec": 0, 00:16:29.194 "rw_mbytes_per_sec": 0, 00:16:29.194 "r_mbytes_per_sec": 0, 00:16:29.194 "w_mbytes_per_sec": 0 00:16:29.194 }, 00:16:29.194 "claimed": false, 00:16:29.194 "zoned": false, 
00:16:29.194 "supported_io_types": { 00:16:29.194 "read": true, 00:16:29.194 "write": true, 00:16:29.194 "unmap": false, 00:16:29.194 "flush": false, 00:16:29.194 "reset": true, 00:16:29.194 "nvme_admin": false, 00:16:29.194 "nvme_io": false, 00:16:29.194 "nvme_io_md": false, 00:16:29.194 "write_zeroes": true, 00:16:29.194 "zcopy": false, 00:16:29.194 "get_zone_info": false, 00:16:29.194 "zone_management": false, 00:16:29.194 "zone_append": false, 00:16:29.194 "compare": false, 00:16:29.194 "compare_and_write": false, 00:16:29.194 "abort": false, 00:16:29.194 "seek_hole": false, 00:16:29.194 "seek_data": false, 00:16:29.194 "copy": false, 00:16:29.194 "nvme_iov_md": false 00:16:29.194 }, 00:16:29.194 "memory_domains": [ 00:16:29.194 { 00:16:29.194 "dma_device_id": "system", 00:16:29.194 "dma_device_type": 1 00:16:29.194 }, 00:16:29.194 { 00:16:29.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.194 "dma_device_type": 2 00:16:29.194 }, 00:16:29.194 { 00:16:29.194 "dma_device_id": "system", 00:16:29.194 "dma_device_type": 1 00:16:29.194 }, 00:16:29.194 { 00:16:29.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.194 "dma_device_type": 2 00:16:29.194 } 00:16:29.194 ], 00:16:29.194 "driver_specific": { 00:16:29.194 "raid": { 00:16:29.194 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:29.194 "strip_size_kb": 0, 00:16:29.194 "state": "online", 00:16:29.194 "raid_level": "raid1", 00:16:29.194 "superblock": true, 00:16:29.194 "num_base_bdevs": 2, 00:16:29.194 "num_base_bdevs_discovered": 2, 00:16:29.194 "num_base_bdevs_operational": 2, 00:16:29.194 "base_bdevs_list": [ 00:16:29.194 { 00:16:29.194 "name": "pt1", 00:16:29.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.194 "is_configured": true, 00:16:29.194 "data_offset": 256, 00:16:29.194 "data_size": 7936 00:16:29.194 }, 00:16:29.194 { 00:16:29.194 "name": "pt2", 00:16:29.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.195 "is_configured": true, 00:16:29.195 "data_offset": 256, 
00:16:29.195 "data_size": 7936 00:16:29.195 } 00:16:29.195 ] 00:16:29.195 } 00:16:29.195 } 00:16:29.195 }' 00:16:29.195 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:29.454 pt2' 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:29.454 [2024-11-19 12:36:34.645242] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e4494679-7214-48a4-8d35-e4c1018d7d86 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z e4494679-7214-48a4-8d35-e4c1018d7d86 ']' 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.454 [2024-11-19 12:36:34.696928] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.454 [2024-11-19 12:36:34.696960] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.454 [2024-11-19 12:36:34.697066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.454 [2024-11-19 12:36:34.697133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.454 [2024-11-19 12:36:34.697146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.454 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:29.714 12:36:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.714 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.714 [2024-11-19 12:36:34.848668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:29.714 [2024-11-19 12:36:34.850656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:29.715 [2024-11-19 12:36:34.850789] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:29.715 [2024-11-19 12:36:34.850896] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:29.715 [2024-11-19 12:36:34.850956] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.715 [2024-11-19 12:36:34.850989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:29.715 request: 00:16:29.715 { 00:16:29.715 "name": 
"raid_bdev1", 00:16:29.715 "raid_level": "raid1", 00:16:29.715 "base_bdevs": [ 00:16:29.715 "malloc1", 00:16:29.715 "malloc2" 00:16:29.715 ], 00:16:29.715 "superblock": false, 00:16:29.715 "method": "bdev_raid_create", 00:16:29.715 "req_id": 1 00:16:29.715 } 00:16:29.715 Got JSON-RPC error response 00:16:29.715 response: 00:16:29.715 { 00:16:29.715 "code": -17, 00:16:29.715 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:29.715 } 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.715 [2024-11-19 12:36:34.916485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.715 [2024-11-19 12:36:34.916621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.715 [2024-11-19 12:36:34.916646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:29.715 [2024-11-19 12:36:34.916655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.715 [2024-11-19 12:36:34.918693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.715 [2024-11-19 12:36:34.918752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:29.715 [2024-11-19 12:36:34.918815] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:29.715 [2024-11-19 12:36:34.918861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.715 pt1 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.715 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.974 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.974 "name": "raid_bdev1", 00:16:29.974 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:29.974 "strip_size_kb": 0, 00:16:29.974 "state": "configuring", 00:16:29.974 "raid_level": "raid1", 00:16:29.974 "superblock": true, 00:16:29.974 "num_base_bdevs": 2, 00:16:29.974 "num_base_bdevs_discovered": 1, 00:16:29.974 "num_base_bdevs_operational": 2, 00:16:29.974 "base_bdevs_list": [ 00:16:29.974 { 00:16:29.974 "name": "pt1", 00:16:29.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.974 "is_configured": true, 00:16:29.974 "data_offset": 256, 00:16:29.974 "data_size": 7936 00:16:29.974 }, 00:16:29.974 { 00:16:29.974 "name": null, 00:16:29.974 
"uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.974 "is_configured": false, 00:16:29.974 "data_offset": 256, 00:16:29.974 "data_size": 7936 00:16:29.974 } 00:16:29.974 ] 00:16:29.974 }' 00:16:29.974 12:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.974 12:36:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.233 [2024-11-19 12:36:35.343817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.233 [2024-11-19 12:36:35.343960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.233 [2024-11-19 12:36:35.344004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:30.233 [2024-11-19 12:36:35.344032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.233 [2024-11-19 12:36:35.344265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.233 [2024-11-19 12:36:35.344311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.233 [2024-11-19 12:36:35.344389] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:16:30.233 [2024-11-19 12:36:35.344433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.233 [2024-11-19 12:36:35.344538] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:30.233 [2024-11-19 12:36:35.344573] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:30.233 [2024-11-19 12:36:35.344662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:30.233 [2024-11-19 12:36:35.344783] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:30.233 [2024-11-19 12:36:35.344825] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:30.233 [2024-11-19 12:36:35.344931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.233 pt2 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.233 "name": "raid_bdev1", 00:16:30.233 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:30.233 "strip_size_kb": 0, 00:16:30.233 "state": "online", 00:16:30.233 "raid_level": "raid1", 00:16:30.233 "superblock": true, 00:16:30.233 "num_base_bdevs": 2, 00:16:30.233 "num_base_bdevs_discovered": 2, 00:16:30.233 "num_base_bdevs_operational": 2, 00:16:30.233 "base_bdevs_list": [ 00:16:30.233 { 00:16:30.233 "name": "pt1", 00:16:30.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.233 "is_configured": true, 00:16:30.233 "data_offset": 256, 00:16:30.233 "data_size": 7936 00:16:30.233 }, 00:16:30.233 { 00:16:30.233 "name": "pt2", 00:16:30.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.233 "is_configured": true, 00:16:30.233 "data_offset": 256, 
00:16:30.233 "data_size": 7936 00:16:30.233 } 00:16:30.233 ] 00:16:30.233 }' 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.233 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.800 [2024-11-19 12:36:35.811268] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.800 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:30.800 "name": "raid_bdev1", 00:16:30.800 "aliases": [ 00:16:30.800 "e4494679-7214-48a4-8d35-e4c1018d7d86" 00:16:30.800 ], 00:16:30.800 "product_name": 
"Raid Volume", 00:16:30.800 "block_size": 4096, 00:16:30.800 "num_blocks": 7936, 00:16:30.800 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:30.800 "md_size": 32, 00:16:30.800 "md_interleave": false, 00:16:30.800 "dif_type": 0, 00:16:30.800 "assigned_rate_limits": { 00:16:30.800 "rw_ios_per_sec": 0, 00:16:30.800 "rw_mbytes_per_sec": 0, 00:16:30.800 "r_mbytes_per_sec": 0, 00:16:30.800 "w_mbytes_per_sec": 0 00:16:30.800 }, 00:16:30.800 "claimed": false, 00:16:30.800 "zoned": false, 00:16:30.800 "supported_io_types": { 00:16:30.800 "read": true, 00:16:30.800 "write": true, 00:16:30.801 "unmap": false, 00:16:30.801 "flush": false, 00:16:30.801 "reset": true, 00:16:30.801 "nvme_admin": false, 00:16:30.801 "nvme_io": false, 00:16:30.801 "nvme_io_md": false, 00:16:30.801 "write_zeroes": true, 00:16:30.801 "zcopy": false, 00:16:30.801 "get_zone_info": false, 00:16:30.801 "zone_management": false, 00:16:30.801 "zone_append": false, 00:16:30.801 "compare": false, 00:16:30.801 "compare_and_write": false, 00:16:30.801 "abort": false, 00:16:30.801 "seek_hole": false, 00:16:30.801 "seek_data": false, 00:16:30.801 "copy": false, 00:16:30.801 "nvme_iov_md": false 00:16:30.801 }, 00:16:30.801 "memory_domains": [ 00:16:30.801 { 00:16:30.801 "dma_device_id": "system", 00:16:30.801 "dma_device_type": 1 00:16:30.801 }, 00:16:30.801 { 00:16:30.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.801 "dma_device_type": 2 00:16:30.801 }, 00:16:30.801 { 00:16:30.801 "dma_device_id": "system", 00:16:30.801 "dma_device_type": 1 00:16:30.801 }, 00:16:30.801 { 00:16:30.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.801 "dma_device_type": 2 00:16:30.801 } 00:16:30.801 ], 00:16:30.801 "driver_specific": { 00:16:30.801 "raid": { 00:16:30.801 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:30.801 "strip_size_kb": 0, 00:16:30.801 "state": "online", 00:16:30.801 "raid_level": "raid1", 00:16:30.801 "superblock": true, 00:16:30.801 "num_base_bdevs": 2, 00:16:30.801 
"num_base_bdevs_discovered": 2, 00:16:30.801 "num_base_bdevs_operational": 2, 00:16:30.801 "base_bdevs_list": [ 00:16:30.801 { 00:16:30.801 "name": "pt1", 00:16:30.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.801 "is_configured": true, 00:16:30.801 "data_offset": 256, 00:16:30.801 "data_size": 7936 00:16:30.801 }, 00:16:30.801 { 00:16:30.801 "name": "pt2", 00:16:30.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.801 "is_configured": true, 00:16:30.801 "data_offset": 256, 00:16:30.801 "data_size": 7936 00:16:30.801 } 00:16:30.801 ] 00:16:30.801 } 00:16:30.801 } 00:16:30.801 }' 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:30.801 pt2' 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.801 
12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.801 12:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.801 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.801 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:30.801 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:30.801 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.801 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.801 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.801 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:30.801 [2024-11-19 12:36:36.055018] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' e4494679-7214-48a4-8d35-e4c1018d7d86 '!=' e4494679-7214-48a4-8d35-e4c1018d7d86 ']' 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.060 [2024-11-19 12:36:36.110604] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.060 12:36:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.060 "name": "raid_bdev1", 00:16:31.060 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:31.060 "strip_size_kb": 0, 00:16:31.060 "state": "online", 00:16:31.060 "raid_level": "raid1", 00:16:31.060 "superblock": true, 00:16:31.060 "num_base_bdevs": 2, 00:16:31.060 "num_base_bdevs_discovered": 1, 00:16:31.060 "num_base_bdevs_operational": 1, 00:16:31.060 "base_bdevs_list": [ 00:16:31.060 { 00:16:31.060 "name": null, 00:16:31.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.060 "is_configured": false, 00:16:31.060 "data_offset": 0, 00:16:31.060 "data_size": 7936 00:16:31.060 }, 00:16:31.060 { 00:16:31.060 "name": "pt2", 00:16:31.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.060 "is_configured": true, 00:16:31.060 "data_offset": 256, 00:16:31.060 "data_size": 7936 00:16:31.060 } 00:16:31.060 ] 00:16:31.060 }' 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:31.060 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.319 [2024-11-19 12:36:36.533869] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.319 [2024-11-19 12:36:36.533978] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.319 [2024-11-19 12:36:36.534062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.319 [2024-11-19 12:36:36.534115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.319 [2024-11-19 12:36:36.534125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:31.319 12:36:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:31.319 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.320 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.578 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.579 [2024-11-19 12:36:36.593781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:31.579 [2024-11-19 12:36:36.593889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.579 
[2024-11-19 12:36:36.593927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:31.579 [2024-11-19 12:36:36.593955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.579 [2024-11-19 12:36:36.595968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.579 [2024-11-19 12:36:36.596043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:31.579 [2024-11-19 12:36:36.596128] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:31.579 [2024-11-19 12:36:36.596177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.579 [2024-11-19 12:36:36.596268] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:31.579 [2024-11-19 12:36:36.596330] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:31.579 [2024-11-19 12:36:36.596420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:31.579 [2024-11-19 12:36:36.596532] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:31.579 [2024-11-19 12:36:36.596567] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:31.579 [2024-11-19 12:36:36.596674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.579 pt2 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.579 "name": "raid_bdev1", 00:16:31.579 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:31.579 "strip_size_kb": 0, 00:16:31.579 "state": "online", 00:16:31.579 "raid_level": "raid1", 00:16:31.579 "superblock": true, 00:16:31.579 "num_base_bdevs": 2, 00:16:31.579 "num_base_bdevs_discovered": 1, 00:16:31.579 "num_base_bdevs_operational": 1, 00:16:31.579 "base_bdevs_list": [ 00:16:31.579 { 00:16:31.579 
"name": null, 00:16:31.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.579 "is_configured": false, 00:16:31.579 "data_offset": 256, 00:16:31.579 "data_size": 7936 00:16:31.579 }, 00:16:31.579 { 00:16:31.579 "name": "pt2", 00:16:31.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.579 "is_configured": true, 00:16:31.579 "data_offset": 256, 00:16:31.579 "data_size": 7936 00:16:31.579 } 00:16:31.579 ] 00:16:31.579 }' 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.579 12:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.838 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.838 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.838 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.838 [2024-11-19 12:36:37.068959] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.838 [2024-11-19 12:36:37.068993] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.838 [2024-11-19 12:36:37.069075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.838 [2024-11-19 12:36:37.069122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.838 [2024-11-19 12:36:37.069133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:31.838 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.838 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:31.838 12:36:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.838 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.838 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.838 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.097 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:32.097 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:32.097 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:32.097 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.097 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.097 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.097 [2024-11-19 12:36:37.116855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.097 [2024-11-19 12:36:37.116977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.097 [2024-11-19 12:36:37.117016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:32.097 [2024-11-19 12:36:37.117050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.097 [2024-11-19 12:36:37.119134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.097 [2024-11-19 12:36:37.119215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.097 [2024-11-19 12:36:37.119296] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:32.097 
[2024-11-19 12:36:37.119359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.097 [2024-11-19 12:36:37.119485] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:32.097 [2024-11-19 12:36:37.119547] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.097 [2024-11-19 12:36:37.119611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:32.097 [2024-11-19 12:36:37.119685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.097 [2024-11-19 12:36:37.119781] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:32.097 [2024-11-19 12:36:37.119825] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:32.097 [2024-11-19 12:36:37.119930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:32.097 [2024-11-19 12:36:37.120043] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:32.098 [2024-11-19 12:36:37.120077] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:32.098 [2024-11-19 12:36:37.120203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.098 pt1 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.098 12:36:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.098 "name": "raid_bdev1", 00:16:32.098 "uuid": "e4494679-7214-48a4-8d35-e4c1018d7d86", 00:16:32.098 "strip_size_kb": 0, 00:16:32.098 "state": "online", 00:16:32.098 "raid_level": "raid1", 00:16:32.098 "superblock": true, 00:16:32.098 "num_base_bdevs": 2, 00:16:32.098 "num_base_bdevs_discovered": 1, 00:16:32.098 
"num_base_bdevs_operational": 1, 00:16:32.098 "base_bdevs_list": [ 00:16:32.098 { 00:16:32.098 "name": null, 00:16:32.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.098 "is_configured": false, 00:16:32.098 "data_offset": 256, 00:16:32.098 "data_size": 7936 00:16:32.098 }, 00:16:32.098 { 00:16:32.098 "name": "pt2", 00:16:32.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.098 "is_configured": true, 00:16:32.098 "data_offset": 256, 00:16:32.098 "data_size": 7936 00:16:32.098 } 00:16:32.098 ] 00:16:32.098 }' 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.098 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.357 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:32.357 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:32.357 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.357 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.357 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:32.617 [2024-11-19 
12:36:37.640250] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' e4494679-7214-48a4-8d35-e4c1018d7d86 '!=' e4494679-7214-48a4-8d35-e4c1018d7d86 ']' 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97992 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97992 ']' 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97992 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97992 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.617 killing process with pid 97992 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97992' 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97992 00:16:32.617 [2024-11-19 12:36:37.716488] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.617 [2024-11-19 12:36:37.716599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.617 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97992 
00:16:32.617 [2024-11-19 12:36:37.716651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.617 [2024-11-19 12:36:37.716660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:32.617 [2024-11-19 12:36:37.740942] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.876 12:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:32.876 00:16:32.876 real 0m5.088s 00:16:32.876 user 0m8.235s 00:16:32.876 sys 0m1.157s 00:16:32.876 ************************************ 00:16:32.876 END TEST raid_superblock_test_md_separate 00:16:32.876 ************************************ 00:16:32.876 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.877 12:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.877 12:36:38 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:32.877 12:36:38 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:32.877 12:36:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:32.877 12:36:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.877 12:36:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:32.877 ************************************ 00:16:32.877 START TEST raid_rebuild_test_sb_md_separate 00:16:32.877 ************************************ 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:32.877 
12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98305 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98305 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98305 ']' 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.877 12:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.136 [2024-11-19 12:36:38.174573] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:33.136 [2024-11-19 12:36:38.174905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:33.136 Zero copy mechanism will not be used. 00:16:33.136 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98305 ] 00:16:33.136 [2024-11-19 12:36:38.340185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.136 [2024-11-19 12:36:38.392597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.395 [2024-11-19 12:36:38.434666] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.395 [2024-11-19 12:36:38.434807] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.964 BaseBdev1_malloc 
00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.964 [2024-11-19 12:36:39.057420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:33.964 [2024-11-19 12:36:39.057541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.964 [2024-11-19 12:36:39.057586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:33.964 [2024-11-19 12:36:39.057616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.964 [2024-11-19 12:36:39.059637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.964 [2024-11-19 12:36:39.059722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:33.964 BaseBdev1 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.964 BaseBdev2_malloc 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.964 [2024-11-19 12:36:39.095938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:33.964 [2024-11-19 12:36:39.096009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.964 [2024-11-19 12:36:39.096034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:33.964 [2024-11-19 12:36:39.096044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.964 [2024-11-19 12:36:39.098240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.964 [2024-11-19 12:36:39.098355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:33.964 BaseBdev2 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.964 spare_malloc 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.964 spare_delay 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.964 [2024-11-19 12:36:39.137187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:33.964 [2024-11-19 12:36:39.137324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.964 [2024-11-19 12:36:39.137374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:33.964 [2024-11-19 12:36:39.137409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.964 [2024-11-19 12:36:39.139351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.964 [2024-11-19 12:36:39.139433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:33.964 spare 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.964 [2024-11-19 12:36:39.149193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.964 [2024-11-19 12:36:39.151032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.964 [2024-11-19 12:36:39.151199] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:33.964 [2024-11-19 12:36:39.151217] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:33.964 [2024-11-19 12:36:39.151313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:33.964 [2024-11-19 12:36:39.151420] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:33.964 [2024-11-19 12:36:39.151431] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:33.964 [2024-11-19 12:36:39.151525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.964 12:36:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.964 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.964 "name": "raid_bdev1", 00:16:33.964 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:33.964 "strip_size_kb": 0, 00:16:33.964 "state": "online", 00:16:33.964 "raid_level": "raid1", 00:16:33.964 "superblock": true, 00:16:33.964 "num_base_bdevs": 2, 00:16:33.964 "num_base_bdevs_discovered": 2, 00:16:33.964 "num_base_bdevs_operational": 2, 00:16:33.964 "base_bdevs_list": [ 00:16:33.964 { 00:16:33.964 "name": "BaseBdev1", 00:16:33.964 "uuid": "9f9d4748-0b74-5952-9115-0e680c8ef0cf", 00:16:33.964 "is_configured": true, 00:16:33.964 "data_offset": 256, 00:16:33.964 "data_size": 7936 00:16:33.965 }, 00:16:33.965 { 00:16:33.965 "name": "BaseBdev2", 00:16:33.965 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:33.965 "is_configured": true, 00:16:33.965 "data_offset": 256, 00:16:33.965 "data_size": 7936 
00:16:33.965 } 00:16:33.965 ] 00:16:33.965 }' 00:16:33.965 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.965 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:34.532 [2024-11-19 12:36:39.564817] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:34.532 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:34.792 [2024-11-19 12:36:39.852050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:34.792 /dev/nbd0 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.792 1+0 records in 00:16:34.792 1+0 records out 00:16:34.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580445 s, 7.1 MB/s 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:34.792 12:36:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:34.792 12:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:35.358 7936+0 records in 00:16:35.358 7936+0 records out 00:16:35.358 32505856 bytes (33 MB, 31 MiB) copied, 0.566518 s, 57.4 MB/s 00:16:35.358 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:35.358 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.358 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:35.358 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:35.358 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:35.358 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.358 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:35.617 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:35.618 [2024-11-19 12:36:40.729013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.618 12:36:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.618 [2024-11-19 12:36:40.753096] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.618 "name": "raid_bdev1", 00:16:35.618 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:35.618 "strip_size_kb": 0, 00:16:35.618 "state": "online", 00:16:35.618 "raid_level": "raid1", 00:16:35.618 "superblock": true, 00:16:35.618 "num_base_bdevs": 2, 00:16:35.618 "num_base_bdevs_discovered": 1, 00:16:35.618 "num_base_bdevs_operational": 1, 00:16:35.618 "base_bdevs_list": [ 00:16:35.618 { 00:16:35.618 "name": null, 00:16:35.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.618 "is_configured": false, 00:16:35.618 "data_offset": 0, 00:16:35.618 "data_size": 7936 00:16:35.618 }, 00:16:35.618 { 00:16:35.618 "name": "BaseBdev2", 00:16:35.618 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:35.618 "is_configured": true, 00:16:35.618 "data_offset": 256, 00:16:35.618 "data_size": 7936 00:16:35.618 } 00:16:35.618 ] 00:16:35.618 }' 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.618 12:36:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.187 12:36:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.187 12:36:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.187 12:36:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.187 [2024-11-19 12:36:41.220342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.187 [2024-11-19 12:36:41.222188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:36.187 [2024-11-19 12:36:41.224238] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.187 12:36:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.187 12:36:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.124 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.124 "name": "raid_bdev1", 00:16:37.124 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:37.124 "strip_size_kb": 0, 00:16:37.124 "state": "online", 00:16:37.124 "raid_level": "raid1", 00:16:37.124 "superblock": true, 00:16:37.124 "num_base_bdevs": 2, 00:16:37.124 "num_base_bdevs_discovered": 2, 00:16:37.124 "num_base_bdevs_operational": 2, 00:16:37.124 "process": { 00:16:37.124 "type": "rebuild", 00:16:37.124 "target": "spare", 00:16:37.124 "progress": { 00:16:37.124 "blocks": 2560, 00:16:37.124 "percent": 32 00:16:37.124 } 00:16:37.124 }, 00:16:37.124 "base_bdevs_list": [ 00:16:37.124 { 00:16:37.125 "name": "spare", 00:16:37.125 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:37.125 "is_configured": true, 00:16:37.125 "data_offset": 256, 00:16:37.125 "data_size": 7936 00:16:37.125 }, 00:16:37.125 { 00:16:37.125 "name": "BaseBdev2", 00:16:37.125 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:37.125 "is_configured": true, 00:16:37.125 "data_offset": 256, 00:16:37.125 "data_size": 7936 00:16:37.125 } 00:16:37.125 ] 00:16:37.125 }' 00:16:37.125 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.125 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.125 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.384 12:36:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.384 [2024-11-19 12:36:42.391202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.384 [2024-11-19 12:36:42.430172] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.384 [2024-11-19 12:36:42.430253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.384 [2024-11-19 12:36:42.430274] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.384 [2024-11-19 12:36:42.430281] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.384 12:36:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.384 "name": "raid_bdev1", 00:16:37.384 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:37.384 "strip_size_kb": 0, 00:16:37.384 "state": "online", 00:16:37.384 "raid_level": "raid1", 00:16:37.384 "superblock": true, 00:16:37.384 "num_base_bdevs": 2, 00:16:37.384 "num_base_bdevs_discovered": 1, 00:16:37.384 "num_base_bdevs_operational": 1, 00:16:37.384 "base_bdevs_list": [ 00:16:37.384 { 00:16:37.384 "name": null, 00:16:37.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.384 "is_configured": false, 00:16:37.384 "data_offset": 0, 00:16:37.384 "data_size": 7936 00:16:37.384 }, 00:16:37.384 { 00:16:37.384 "name": "BaseBdev2", 00:16:37.384 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:37.384 "is_configured": true, 00:16:37.384 "data_offset": 256, 00:16:37.384 "data_size": 7936 00:16:37.384 } 00:16:37.384 ] 00:16:37.384 }' 00:16:37.384 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.385 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.953 "name": "raid_bdev1", 00:16:37.953 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:37.953 "strip_size_kb": 0, 00:16:37.953 "state": "online", 00:16:37.953 "raid_level": "raid1", 00:16:37.953 "superblock": true, 00:16:37.953 "num_base_bdevs": 2, 00:16:37.953 "num_base_bdevs_discovered": 1, 00:16:37.953 "num_base_bdevs_operational": 1, 00:16:37.953 "base_bdevs_list": [ 00:16:37.953 { 00:16:37.953 "name": null, 00:16:37.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.953 
"is_configured": false, 00:16:37.953 "data_offset": 0, 00:16:37.953 "data_size": 7936 00:16:37.953 }, 00:16:37.953 { 00:16:37.953 "name": "BaseBdev2", 00:16:37.953 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:37.953 "is_configured": true, 00:16:37.953 "data_offset": 256, 00:16:37.953 "data_size": 7936 00:16:37.953 } 00:16:37.953 ] 00:16:37.953 }' 00:16:37.953 12:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.953 12:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.953 12:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.953 12:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.953 12:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:37.953 12:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.954 12:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.954 [2024-11-19 12:36:43.056444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.954 [2024-11-19 12:36:43.058201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:37.954 [2024-11-19 12:36:43.060111] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.954 12:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.954 12:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.891 12:36:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.891 "name": "raid_bdev1", 00:16:38.891 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:38.891 "strip_size_kb": 0, 00:16:38.891 "state": "online", 00:16:38.891 "raid_level": "raid1", 00:16:38.891 "superblock": true, 00:16:38.891 "num_base_bdevs": 2, 00:16:38.891 "num_base_bdevs_discovered": 2, 00:16:38.891 "num_base_bdevs_operational": 2, 00:16:38.891 "process": { 00:16:38.891 "type": "rebuild", 00:16:38.891 "target": "spare", 00:16:38.891 "progress": { 00:16:38.891 "blocks": 2560, 00:16:38.891 "percent": 32 00:16:38.891 } 00:16:38.891 }, 00:16:38.891 "base_bdevs_list": [ 00:16:38.891 { 00:16:38.891 "name": "spare", 00:16:38.891 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:38.891 "is_configured": true, 00:16:38.891 "data_offset": 256, 00:16:38.891 "data_size": 7936 00:16:38.891 }, 
00:16:38.891 { 00:16:38.891 "name": "BaseBdev2", 00:16:38.891 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:38.891 "is_configured": true, 00:16:38.891 "data_offset": 256, 00:16:38.891 "data_size": 7936 00:16:38.891 } 00:16:38.891 ] 00:16:38.891 }' 00:16:38.891 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:39.154 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=597 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.154 12:36:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.154 "name": "raid_bdev1", 00:16:39.154 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:39.154 "strip_size_kb": 0, 00:16:39.154 "state": "online", 00:16:39.154 "raid_level": "raid1", 00:16:39.154 "superblock": true, 00:16:39.154 "num_base_bdevs": 2, 00:16:39.154 "num_base_bdevs_discovered": 2, 00:16:39.154 "num_base_bdevs_operational": 2, 00:16:39.154 "process": { 00:16:39.154 "type": "rebuild", 00:16:39.154 "target": "spare", 00:16:39.154 "progress": { 00:16:39.154 "blocks": 2816, 00:16:39.154 "percent": 35 00:16:39.154 } 00:16:39.154 }, 00:16:39.154 "base_bdevs_list": [ 00:16:39.154 { 00:16:39.154 "name": "spare", 00:16:39.154 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:39.154 "is_configured": true, 00:16:39.154 "data_offset": 256, 00:16:39.154 "data_size": 7936 00:16:39.154 }, 00:16:39.154 { 00:16:39.154 "name": "BaseBdev2", 00:16:39.154 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:39.154 
"is_configured": true, 00:16:39.154 "data_offset": 256, 00:16:39.154 "data_size": 7936 00:16:39.154 } 00:16:39.154 ] 00:16:39.154 }' 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.154 12:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.130 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.389 12:36:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.389 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.389 "name": "raid_bdev1", 00:16:40.389 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:40.389 "strip_size_kb": 0, 00:16:40.389 "state": "online", 00:16:40.389 "raid_level": "raid1", 00:16:40.389 "superblock": true, 00:16:40.389 "num_base_bdevs": 2, 00:16:40.389 "num_base_bdevs_discovered": 2, 00:16:40.389 "num_base_bdevs_operational": 2, 00:16:40.389 "process": { 00:16:40.389 "type": "rebuild", 00:16:40.389 "target": "spare", 00:16:40.389 "progress": { 00:16:40.389 "blocks": 5888, 00:16:40.389 "percent": 74 00:16:40.389 } 00:16:40.389 }, 00:16:40.389 "base_bdevs_list": [ 00:16:40.389 { 00:16:40.389 "name": "spare", 00:16:40.389 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:40.389 "is_configured": true, 00:16:40.389 "data_offset": 256, 00:16:40.389 "data_size": 7936 00:16:40.389 }, 00:16:40.389 { 00:16:40.389 "name": "BaseBdev2", 00:16:40.389 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:40.389 "is_configured": true, 00:16:40.389 "data_offset": 256, 00:16:40.389 "data_size": 7936 00:16:40.389 } 00:16:40.389 ] 00:16:40.389 }' 00:16:40.389 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.389 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.389 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.389 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.389 12:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.958 [2024-11-19 12:36:46.174185] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:16:40.958 [2024-11-19 12:36:46.174323] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:40.958 [2024-11-19 12:36:46.174455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.528 "name": "raid_bdev1", 00:16:41.528 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:41.528 "strip_size_kb": 0, 00:16:41.528 "state": "online", 00:16:41.528 "raid_level": "raid1", 00:16:41.528 "superblock": true, 00:16:41.528 
"num_base_bdevs": 2, 00:16:41.528 "num_base_bdevs_discovered": 2, 00:16:41.528 "num_base_bdevs_operational": 2, 00:16:41.528 "base_bdevs_list": [ 00:16:41.528 { 00:16:41.528 "name": "spare", 00:16:41.528 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:41.528 "is_configured": true, 00:16:41.528 "data_offset": 256, 00:16:41.528 "data_size": 7936 00:16:41.528 }, 00:16:41.528 { 00:16:41.528 "name": "BaseBdev2", 00:16:41.528 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:41.528 "is_configured": true, 00:16:41.528 "data_offset": 256, 00:16:41.528 "data_size": 7936 00:16:41.528 } 00:16:41.528 ] 00:16:41.528 }' 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.528 12:36:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.528 "name": "raid_bdev1", 00:16:41.528 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:41.528 "strip_size_kb": 0, 00:16:41.528 "state": "online", 00:16:41.528 "raid_level": "raid1", 00:16:41.528 "superblock": true, 00:16:41.528 "num_base_bdevs": 2, 00:16:41.528 "num_base_bdevs_discovered": 2, 00:16:41.528 "num_base_bdevs_operational": 2, 00:16:41.528 "base_bdevs_list": [ 00:16:41.528 { 00:16:41.528 "name": "spare", 00:16:41.528 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:41.528 "is_configured": true, 00:16:41.528 "data_offset": 256, 00:16:41.528 "data_size": 7936 00:16:41.528 }, 00:16:41.528 { 00:16:41.528 "name": "BaseBdev2", 00:16:41.528 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:41.528 "is_configured": true, 00:16:41.528 "data_offset": 256, 00:16:41.528 "data_size": 7936 00:16:41.528 } 00:16:41.528 ] 00:16:41.528 }' 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.528 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.788 "name": "raid_bdev1", 00:16:41.788 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:41.788 
"strip_size_kb": 0, 00:16:41.788 "state": "online", 00:16:41.788 "raid_level": "raid1", 00:16:41.788 "superblock": true, 00:16:41.788 "num_base_bdevs": 2, 00:16:41.788 "num_base_bdevs_discovered": 2, 00:16:41.788 "num_base_bdevs_operational": 2, 00:16:41.788 "base_bdevs_list": [ 00:16:41.788 { 00:16:41.788 "name": "spare", 00:16:41.788 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:41.788 "is_configured": true, 00:16:41.788 "data_offset": 256, 00:16:41.788 "data_size": 7936 00:16:41.788 }, 00:16:41.788 { 00:16:41.788 "name": "BaseBdev2", 00:16:41.788 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:41.788 "is_configured": true, 00:16:41.788 "data_offset": 256, 00:16:41.788 "data_size": 7936 00:16:41.788 } 00:16:41.788 ] 00:16:41.788 }' 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.788 12:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.047 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.048 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.048 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.048 [2024-11-19 12:36:47.263724] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.048 [2024-11-19 12:36:47.263771] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.048 [2024-11-19 12:36:47.263874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.048 [2024-11-19 12:36:47.263977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.048 [2024-11-19 12:36:47.264004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, 
state offline 00:16:42.048 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.048 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.048 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.048 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:42.048 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.048 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.307 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:42.307 /dev/nbd0 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:42.566 1+0 records in 00:16:42.566 1+0 records out 00:16:42.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405342 s, 10.1 MB/s 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.566 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:42.566 /dev/nbd1 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:42.825 1+0 records in 00:16:42.825 1+0 records out 00:16:42.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462508 s, 8.9 MB/s 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.825 12:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:43.084 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:43.343 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:43.343 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.344 [2024-11-19 12:36:48.436077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:43.344 [2024-11-19 12:36:48.436153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.344 [2024-11-19 12:36:48.436174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:43.344 [2024-11-19 12:36:48.436187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:43.344 [2024-11-19 12:36:48.438119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.344 [2024-11-19 12:36:48.438159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:43.344 [2024-11-19 12:36:48.438230] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:43.344 [2024-11-19 12:36:48.438286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.344 [2024-11-19 12:36:48.438415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.344 spare 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.344 [2024-11-19 12:36:48.538326] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:43.344 [2024-11-19 12:36:48.538370] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:43.344 [2024-11-19 12:36:48.538558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:43.344 [2024-11-19 12:36:48.538727] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:43.344 [2024-11-19 12:36:48.538759] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:43.344 [2024-11-19 12:36:48.538898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.344 "name": "raid_bdev1", 00:16:43.344 "uuid": 
"95942ec5-313e-40f7-b241-0482704f0d06", 00:16:43.344 "strip_size_kb": 0, 00:16:43.344 "state": "online", 00:16:43.344 "raid_level": "raid1", 00:16:43.344 "superblock": true, 00:16:43.344 "num_base_bdevs": 2, 00:16:43.344 "num_base_bdevs_discovered": 2, 00:16:43.344 "num_base_bdevs_operational": 2, 00:16:43.344 "base_bdevs_list": [ 00:16:43.344 { 00:16:43.344 "name": "spare", 00:16:43.344 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:43.344 "is_configured": true, 00:16:43.344 "data_offset": 256, 00:16:43.344 "data_size": 7936 00:16:43.344 }, 00:16:43.344 { 00:16:43.344 "name": "BaseBdev2", 00:16:43.344 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:43.344 "is_configured": true, 00:16:43.344 "data_offset": 256, 00:16:43.344 "data_size": 7936 00:16:43.344 } 00:16:43.344 ] 00:16:43.344 }' 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.344 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.913 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.913 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.913 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.913 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.913 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.913 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.913 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.913 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.913 12:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.913 "name": "raid_bdev1", 00:16:43.913 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:43.913 "strip_size_kb": 0, 00:16:43.913 "state": "online", 00:16:43.913 "raid_level": "raid1", 00:16:43.913 "superblock": true, 00:16:43.913 "num_base_bdevs": 2, 00:16:43.913 "num_base_bdevs_discovered": 2, 00:16:43.913 "num_base_bdevs_operational": 2, 00:16:43.913 "base_bdevs_list": [ 00:16:43.913 { 00:16:43.913 "name": "spare", 00:16:43.913 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:43.913 "is_configured": true, 00:16:43.913 "data_offset": 256, 00:16:43.913 "data_size": 7936 00:16:43.913 }, 00:16:43.913 { 00:16:43.913 "name": "BaseBdev2", 00:16:43.913 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:43.913 "is_configured": true, 00:16:43.913 "data_offset": 256, 00:16:43.913 "data_size": 7936 00:16:43.913 } 00:16:43.913 ] 00:16:43.913 }' 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.913 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.913 [2024-11-19 12:36:49.162910] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.914 12:36:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.914 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.172 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.172 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.172 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.172 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.172 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.172 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.172 "name": "raid_bdev1", 00:16:44.172 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:44.172 "strip_size_kb": 0, 00:16:44.172 "state": "online", 00:16:44.172 "raid_level": "raid1", 00:16:44.172 "superblock": true, 00:16:44.172 "num_base_bdevs": 2, 00:16:44.172 "num_base_bdevs_discovered": 1, 00:16:44.172 "num_base_bdevs_operational": 1, 00:16:44.172 "base_bdevs_list": [ 00:16:44.172 { 00:16:44.172 "name": null, 00:16:44.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.172 "is_configured": false, 00:16:44.172 "data_offset": 0, 00:16:44.172 "data_size": 7936 00:16:44.172 }, 00:16:44.172 { 00:16:44.172 "name": "BaseBdev2", 00:16:44.172 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:44.172 "is_configured": true, 00:16:44.172 "data_offset": 256, 00:16:44.172 "data_size": 7936 00:16:44.172 } 00:16:44.172 ] 00:16:44.172 }' 00:16:44.172 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.172 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.431 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:44.431 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.431 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.431 [2024-11-19 12:36:49.622157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.431 [2024-11-19 12:36:49.622369] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:44.431 [2024-11-19 12:36:49.622400] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:44.431 [2024-11-19 12:36:49.622445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.431 [2024-11-19 12:36:49.624122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:44.431 [2024-11-19 12:36:49.626081] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.431 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.431 12:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.810 12:36:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.810 "name": "raid_bdev1", 00:16:45.810 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:45.810 "strip_size_kb": 0, 00:16:45.810 "state": "online", 00:16:45.810 "raid_level": "raid1", 00:16:45.810 "superblock": true, 00:16:45.810 "num_base_bdevs": 2, 00:16:45.810 "num_base_bdevs_discovered": 2, 00:16:45.810 "num_base_bdevs_operational": 2, 00:16:45.810 "process": { 00:16:45.810 "type": "rebuild", 00:16:45.810 "target": "spare", 00:16:45.810 "progress": { 00:16:45.810 "blocks": 2560, 00:16:45.810 "percent": 32 00:16:45.810 } 00:16:45.810 }, 00:16:45.810 "base_bdevs_list": [ 00:16:45.810 { 00:16:45.810 "name": "spare", 00:16:45.810 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:45.810 "is_configured": true, 00:16:45.810 "data_offset": 256, 00:16:45.810 "data_size": 7936 00:16:45.810 }, 00:16:45.810 { 00:16:45.810 "name": "BaseBdev2", 00:16:45.810 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:45.810 "is_configured": true, 00:16:45.810 "data_offset": 256, 00:16:45.810 "data_size": 7936 00:16:45.810 } 00:16:45.810 ] 00:16:45.810 
}' 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.810 [2024-11-19 12:36:50.773270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.810 [2024-11-19 12:36:50.831376] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:45.810 [2024-11-19 12:36:50.831462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.810 [2024-11-19 12:36:50.831481] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.810 [2024-11-19 12:36:50.831488] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.810 "name": "raid_bdev1", 00:16:45.810 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:45.810 "strip_size_kb": 0, 00:16:45.810 "state": "online", 00:16:45.810 "raid_level": "raid1", 00:16:45.810 "superblock": true, 00:16:45.810 "num_base_bdevs": 2, 00:16:45.810 "num_base_bdevs_discovered": 1, 00:16:45.810 "num_base_bdevs_operational": 1, 00:16:45.810 "base_bdevs_list": [ 00:16:45.810 { 00:16:45.810 "name": 
null, 00:16:45.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.810 "is_configured": false, 00:16:45.810 "data_offset": 0, 00:16:45.810 "data_size": 7936 00:16:45.810 }, 00:16:45.810 { 00:16:45.810 "name": "BaseBdev2", 00:16:45.810 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:45.810 "is_configured": true, 00:16:45.810 "data_offset": 256, 00:16:45.810 "data_size": 7936 00:16:45.810 } 00:16:45.810 ] 00:16:45.810 }' 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.810 12:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.070 12:36:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:46.070 12:36:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.070 12:36:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:46.070 [2024-11-19 12:36:51.273793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:46.070 [2024-11-19 12:36:51.273880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.070 [2024-11-19 12:36:51.273904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:46.070 [2024-11-19 12:36:51.273914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.070 [2024-11-19 12:36:51.274159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.070 [2024-11-19 12:36:51.274178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:46.070 [2024-11-19 12:36:51.274253] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:46.070 [2024-11-19 12:36:51.274272] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:46.070 [2024-11-19 12:36:51.274287] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:46.070 [2024-11-19 12:36:51.274316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.070 [2024-11-19 12:36:51.276000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:46.070 [2024-11-19 12:36:51.277872] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:46.070 spare 00:16:46.070 12:36:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.070 12:36:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.450 12:36:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.450 "name": "raid_bdev1", 00:16:47.450 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:47.450 "strip_size_kb": 0, 00:16:47.450 "state": "online", 00:16:47.450 "raid_level": "raid1", 00:16:47.450 "superblock": true, 00:16:47.450 "num_base_bdevs": 2, 00:16:47.450 "num_base_bdevs_discovered": 2, 00:16:47.450 "num_base_bdevs_operational": 2, 00:16:47.450 "process": { 00:16:47.450 "type": "rebuild", 00:16:47.450 "target": "spare", 00:16:47.450 "progress": { 00:16:47.450 "blocks": 2560, 00:16:47.450 "percent": 32 00:16:47.450 } 00:16:47.450 }, 00:16:47.450 "base_bdevs_list": [ 00:16:47.450 { 00:16:47.450 "name": "spare", 00:16:47.450 "uuid": "1fcd2b29-54f5-5cbd-bf65-eb7bc12f2d43", 00:16:47.450 "is_configured": true, 00:16:47.450 "data_offset": 256, 00:16:47.450 "data_size": 7936 00:16:47.450 }, 00:16:47.450 { 00:16:47.450 "name": "BaseBdev2", 00:16:47.450 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:47.450 "is_configured": true, 00:16:47.450 "data_offset": 256, 00:16:47.450 "data_size": 7936 00:16:47.450 } 00:16:47.450 ] 00:16:47.450 }' 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.450 [2024-11-19 12:36:52.428513] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.450 [2024-11-19 12:36:52.483212] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.450 [2024-11-19 12:36:52.483316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.450 [2024-11-19 12:36:52.483331] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.450 [2024-11-19 12:36:52.483340] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.450 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.450 "name": "raid_bdev1", 00:16:47.450 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:47.450 "strip_size_kb": 0, 00:16:47.450 "state": "online", 00:16:47.450 "raid_level": "raid1", 00:16:47.450 "superblock": true, 00:16:47.451 "num_base_bdevs": 2, 00:16:47.451 "num_base_bdevs_discovered": 1, 00:16:47.451 "num_base_bdevs_operational": 1, 00:16:47.451 "base_bdevs_list": [ 00:16:47.451 { 00:16:47.451 "name": null, 00:16:47.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.451 "is_configured": false, 00:16:47.451 "data_offset": 0, 00:16:47.451 "data_size": 7936 00:16:47.451 }, 00:16:47.451 { 00:16:47.451 "name": "BaseBdev2", 00:16:47.451 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:47.451 "is_configured": true, 00:16:47.451 "data_offset": 256, 00:16:47.451 "data_size": 7936 00:16:47.451 } 00:16:47.451 ] 00:16:47.451 }' 00:16:47.451 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.451 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.710 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.970 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.970 "name": "raid_bdev1", 00:16:47.970 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:47.970 "strip_size_kb": 0, 00:16:47.970 "state": "online", 00:16:47.970 "raid_level": "raid1", 00:16:47.970 "superblock": true, 00:16:47.970 "num_base_bdevs": 2, 00:16:47.970 "num_base_bdevs_discovered": 1, 00:16:47.970 "num_base_bdevs_operational": 1, 00:16:47.970 "base_bdevs_list": [ 00:16:47.970 { 00:16:47.970 "name": null, 00:16:47.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.970 "is_configured": false, 00:16:47.970 "data_offset": 0, 00:16:47.970 "data_size": 7936 00:16:47.970 }, 00:16:47.970 { 00:16:47.970 "name": "BaseBdev2", 00:16:47.970 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 
00:16:47.970 "is_configured": true, 00:16:47.970 "data_offset": 256, 00:16:47.970 "data_size": 7936 00:16:47.970 } 00:16:47.970 ] 00:16:47.970 }' 00:16:47.970 12:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.970 [2024-11-19 12:36:53.109289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:47.970 [2024-11-19 12:36:53.109355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.970 [2024-11-19 12:36:53.109378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:47.970 [2024-11-19 12:36:53.109389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:47.970 [2024-11-19 12:36:53.109600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.970 [2024-11-19 12:36:53.109617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:47.970 [2024-11-19 12:36:53.109687] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:47.970 [2024-11-19 12:36:53.109716] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.970 [2024-11-19 12:36:53.109732] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:47.970 [2024-11-19 12:36:53.109757] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:47.970 BaseBdev1 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.970 12:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.907 12:36:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.907 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.166 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.166 "name": "raid_bdev1", 00:16:49.166 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:49.166 "strip_size_kb": 0, 00:16:49.166 "state": "online", 00:16:49.166 "raid_level": "raid1", 00:16:49.166 "superblock": true, 00:16:49.166 "num_base_bdevs": 2, 00:16:49.166 "num_base_bdevs_discovered": 1, 00:16:49.166 "num_base_bdevs_operational": 1, 00:16:49.166 "base_bdevs_list": [ 00:16:49.166 { 00:16:49.166 "name": null, 00:16:49.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.166 "is_configured": false, 00:16:49.166 "data_offset": 0, 00:16:49.166 "data_size": 7936 00:16:49.166 }, 00:16:49.166 { 00:16:49.166 "name": "BaseBdev2", 00:16:49.166 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:49.166 "is_configured": true, 00:16:49.166 "data_offset": 256, 00:16:49.166 "data_size": 7936 00:16:49.166 } 00:16:49.166 ] 00:16:49.166 }' 00:16:49.166 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.166 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.425 "name": "raid_bdev1", 00:16:49.425 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:49.425 "strip_size_kb": 0, 00:16:49.425 "state": "online", 00:16:49.425 "raid_level": "raid1", 00:16:49.425 "superblock": true, 00:16:49.425 "num_base_bdevs": 2, 00:16:49.425 "num_base_bdevs_discovered": 1, 00:16:49.425 "num_base_bdevs_operational": 1, 00:16:49.425 "base_bdevs_list": [ 00:16:49.425 { 00:16:49.425 "name": null, 00:16:49.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.425 
"is_configured": false, 00:16:49.425 "data_offset": 0, 00:16:49.425 "data_size": 7936 00:16:49.425 }, 00:16:49.425 { 00:16:49.425 "name": "BaseBdev2", 00:16:49.425 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:49.425 "is_configured": true, 00:16:49.425 "data_offset": 256, 00:16:49.425 "data_size": 7936 00:16:49.425 } 00:16:49.425 ] 00:16:49.425 }' 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.425 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:49.684 12:36:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.684 [2024-11-19 12:36:54.718613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.684 [2024-11-19 12:36:54.718822] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:49.684 [2024-11-19 12:36:54.718839] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:49.684 request: 00:16:49.684 { 00:16:49.684 "base_bdev": "BaseBdev1", 00:16:49.684 "raid_bdev": "raid_bdev1", 00:16:49.684 "method": "bdev_raid_add_base_bdev", 00:16:49.684 "req_id": 1 00:16:49.684 } 00:16:49.684 Got JSON-RPC error response 00:16:49.684 response: 00:16:49.684 { 00:16:49.684 "code": -22, 00:16:49.684 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:49.684 } 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:49.684 12:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.637 "name": "raid_bdev1", 00:16:50.637 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:50.637 "strip_size_kb": 0, 00:16:50.637 "state": "online", 00:16:50.637 "raid_level": "raid1", 00:16:50.637 "superblock": true, 00:16:50.637 "num_base_bdevs": 2, 00:16:50.637 
"num_base_bdevs_discovered": 1, 00:16:50.637 "num_base_bdevs_operational": 1, 00:16:50.637 "base_bdevs_list": [ 00:16:50.637 { 00:16:50.637 "name": null, 00:16:50.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.637 "is_configured": false, 00:16:50.637 "data_offset": 0, 00:16:50.637 "data_size": 7936 00:16:50.637 }, 00:16:50.637 { 00:16:50.637 "name": "BaseBdev2", 00:16:50.637 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:50.637 "is_configured": true, 00:16:50.637 "data_offset": 256, 00:16:50.637 "data_size": 7936 00:16:50.637 } 00:16:50.637 ] 00:16:50.637 }' 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.637 12:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:51.205 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.205 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.206 "name": "raid_bdev1", 00:16:51.206 "uuid": "95942ec5-313e-40f7-b241-0482704f0d06", 00:16:51.206 "strip_size_kb": 0, 00:16:51.206 "state": "online", 00:16:51.206 "raid_level": "raid1", 00:16:51.206 "superblock": true, 00:16:51.206 "num_base_bdevs": 2, 00:16:51.206 "num_base_bdevs_discovered": 1, 00:16:51.206 "num_base_bdevs_operational": 1, 00:16:51.206 "base_bdevs_list": [ 00:16:51.206 { 00:16:51.206 "name": null, 00:16:51.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.206 "is_configured": false, 00:16:51.206 "data_offset": 0, 00:16:51.206 "data_size": 7936 00:16:51.206 }, 00:16:51.206 { 00:16:51.206 "name": "BaseBdev2", 00:16:51.206 "uuid": "8f62a9e9-9462-52c0-a966-af0e9a301eb7", 00:16:51.206 "is_configured": true, 00:16:51.206 "data_offset": 256, 00:16:51.206 "data_size": 7936 00:16:51.206 } 00:16:51.206 ] 00:16:51.206 }' 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98305 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98305 ']' 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98305 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:51.206 12:36:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98305 00:16:51.206 killing process with pid 98305 00:16:51.206 Received shutdown signal, test time was about 60.000000 seconds 00:16:51.206 00:16:51.206 Latency(us) 00:16:51.206 [2024-11-19T12:36:56.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.206 [2024-11-19T12:36:56.467Z] =================================================================================================================== 00:16:51.206 [2024-11-19T12:36:56.467Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98305' 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98305 00:16:51.206 [2024-11-19 12:36:56.349284] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.206 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98305 00:16:51.206 [2024-11-19 12:36:56.349442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.206 [2024-11-19 12:36:56.349493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.206 [2024-11-19 12:36:56.349503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:51.206 [2024-11-19 12:36:56.383661] bdev_raid.c:1409:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:16:51.466 ************************************ 00:16:51.466 END TEST raid_rebuild_test_sb_md_separate 00:16:51.466 ************************************ 00:16:51.466 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:51.466 00:16:51.466 real 0m18.547s 00:16:51.466 user 0m24.601s 00:16:51.466 sys 0m2.812s 00:16:51.466 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.466 12:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:51.466 12:36:56 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:51.466 12:36:56 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:51.466 12:36:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:51.466 12:36:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.466 12:36:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.466 ************************************ 00:16:51.466 START TEST raid_state_function_test_sb_md_interleaved 00:16:51.466 ************************************ 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:51.466 12:36:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98989 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98989' 00:16:51.466 Process raid pid: 98989 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98989 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98989 ']' 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.466 12:36:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.725 [2024-11-19 12:36:56.791009] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:51.725 [2024-11-19 12:36:56.791602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.725 [2024-11-19 12:36:56.952359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.984 [2024-11-19 12:36:57.005960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.984 [2024-11-19 12:36:57.048066] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.984 [2024-11-19 12:36:57.048191] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.553 [2024-11-19 12:36:57.665372] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.553 [2024-11-19 12:36:57.665486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.553 [2024-11-19 12:36:57.665519] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.553 [2024-11-19 12:36:57.665530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.553 12:36:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.553 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.554 12:36:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.554 "name": "Existed_Raid", 00:16:52.554 "uuid": "1b8cf084-e2de-41d4-9373-3f1c845e3bdc", 00:16:52.554 "strip_size_kb": 0, 00:16:52.554 "state": "configuring", 00:16:52.554 "raid_level": "raid1", 00:16:52.554 "superblock": true, 00:16:52.554 "num_base_bdevs": 2, 00:16:52.554 "num_base_bdevs_discovered": 0, 00:16:52.554 "num_base_bdevs_operational": 2, 00:16:52.554 "base_bdevs_list": [ 00:16:52.554 { 00:16:52.554 "name": "BaseBdev1", 00:16:52.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.554 "is_configured": false, 00:16:52.554 "data_offset": 0, 00:16:52.554 "data_size": 0 00:16:52.554 }, 00:16:52.554 { 00:16:52.554 "name": "BaseBdev2", 00:16:52.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.554 "is_configured": false, 00:16:52.554 "data_offset": 0, 00:16:52.554 "data_size": 0 00:16:52.554 } 00:16:52.554 ] 00:16:52.554 }' 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.554 12:36:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.122 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.122 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.122 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.123 [2024-11-19 12:36:58.136485] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.123 [2024-11-19 12:36:58.136602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.123 [2024-11-19 12:36:58.148503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.123 [2024-11-19 12:36:58.148591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.123 [2024-11-19 12:36:58.148635] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.123 [2024-11-19 12:36:58.148659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.123 [2024-11-19 12:36:58.169456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.123 BaseBdev1 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.123 [ 00:16:53.123 { 00:16:53.123 "name": "BaseBdev1", 00:16:53.123 "aliases": [ 00:16:53.123 "c8c7f803-d83e-44c5-a31b-c33c976596f3" 00:16:53.123 ], 00:16:53.123 "product_name": "Malloc disk", 00:16:53.123 "block_size": 4128, 00:16:53.123 "num_blocks": 8192, 00:16:53.123 "uuid": "c8c7f803-d83e-44c5-a31b-c33c976596f3", 00:16:53.123 "md_size": 32, 00:16:53.123 
"md_interleave": true, 00:16:53.123 "dif_type": 0, 00:16:53.123 "assigned_rate_limits": { 00:16:53.123 "rw_ios_per_sec": 0, 00:16:53.123 "rw_mbytes_per_sec": 0, 00:16:53.123 "r_mbytes_per_sec": 0, 00:16:53.123 "w_mbytes_per_sec": 0 00:16:53.123 }, 00:16:53.123 "claimed": true, 00:16:53.123 "claim_type": "exclusive_write", 00:16:53.123 "zoned": false, 00:16:53.123 "supported_io_types": { 00:16:53.123 "read": true, 00:16:53.123 "write": true, 00:16:53.123 "unmap": true, 00:16:53.123 "flush": true, 00:16:53.123 "reset": true, 00:16:53.123 "nvme_admin": false, 00:16:53.123 "nvme_io": false, 00:16:53.123 "nvme_io_md": false, 00:16:53.123 "write_zeroes": true, 00:16:53.123 "zcopy": true, 00:16:53.123 "get_zone_info": false, 00:16:53.123 "zone_management": false, 00:16:53.123 "zone_append": false, 00:16:53.123 "compare": false, 00:16:53.123 "compare_and_write": false, 00:16:53.123 "abort": true, 00:16:53.123 "seek_hole": false, 00:16:53.123 "seek_data": false, 00:16:53.123 "copy": true, 00:16:53.123 "nvme_iov_md": false 00:16:53.123 }, 00:16:53.123 "memory_domains": [ 00:16:53.123 { 00:16:53.123 "dma_device_id": "system", 00:16:53.123 "dma_device_type": 1 00:16:53.123 }, 00:16:53.123 { 00:16:53.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.123 "dma_device_type": 2 00:16:53.123 } 00:16:53.123 ], 00:16:53.123 "driver_specific": {} 00:16:53.123 } 00:16:53.123 ] 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.123 12:36:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.123 "name": "Existed_Raid", 00:16:53.123 "uuid": "d51c60e0-85e5-4cc0-9b5c-4fa94d86d051", 00:16:53.123 "strip_size_kb": 0, 00:16:53.123 "state": "configuring", 00:16:53.123 "raid_level": "raid1", 
00:16:53.123 "superblock": true, 00:16:53.123 "num_base_bdevs": 2, 00:16:53.123 "num_base_bdevs_discovered": 1, 00:16:53.123 "num_base_bdevs_operational": 2, 00:16:53.123 "base_bdevs_list": [ 00:16:53.123 { 00:16:53.123 "name": "BaseBdev1", 00:16:53.123 "uuid": "c8c7f803-d83e-44c5-a31b-c33c976596f3", 00:16:53.123 "is_configured": true, 00:16:53.123 "data_offset": 256, 00:16:53.123 "data_size": 7936 00:16:53.123 }, 00:16:53.123 { 00:16:53.123 "name": "BaseBdev2", 00:16:53.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.123 "is_configured": false, 00:16:53.123 "data_offset": 0, 00:16:53.123 "data_size": 0 00:16:53.123 } 00:16:53.123 ] 00:16:53.123 }' 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.123 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.691 [2024-11-19 12:36:58.676701] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.691 [2024-11-19 12:36:58.676851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.691 [2024-11-19 12:36:58.688822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.691 [2024-11-19 12:36:58.690807] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.691 [2024-11-19 12:36:58.690886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.691 
12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.691 "name": "Existed_Raid", 00:16:53.691 "uuid": "6ec3633f-6b03-445e-ab89-8049b6a18840", 00:16:53.691 "strip_size_kb": 0, 00:16:53.691 "state": "configuring", 00:16:53.691 "raid_level": "raid1", 00:16:53.691 "superblock": true, 00:16:53.691 "num_base_bdevs": 2, 00:16:53.691 "num_base_bdevs_discovered": 1, 00:16:53.691 "num_base_bdevs_operational": 2, 00:16:53.691 "base_bdevs_list": [ 00:16:53.691 { 00:16:53.691 "name": "BaseBdev1", 00:16:53.691 "uuid": "c8c7f803-d83e-44c5-a31b-c33c976596f3", 00:16:53.691 "is_configured": true, 00:16:53.691 "data_offset": 256, 00:16:53.691 "data_size": 7936 00:16:53.691 }, 00:16:53.691 { 00:16:53.691 "name": "BaseBdev2", 00:16:53.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.691 "is_configured": false, 00:16:53.691 "data_offset": 0, 00:16:53.691 "data_size": 0 00:16:53.691 } 00:16:53.691 ] 00:16:53.691 }' 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:53.691 12:36:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.951 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:53.951 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.951 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.951 [2024-11-19 12:36:59.207813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.951 [2024-11-19 12:36:59.208021] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:53.951 [2024-11-19 12:36:59.208042] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:53.951 [2024-11-19 12:36:59.208154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:53.951 [2024-11-19 12:36:59.208238] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:53.951 [2024-11-19 12:36:59.208253] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:53.951 [2024-11-19 12:36:59.208325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.210 BaseBdev2 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.210 [ 00:16:54.210 { 00:16:54.210 "name": "BaseBdev2", 00:16:54.210 "aliases": [ 00:16:54.210 "45f6cf40-eb0b-4f4f-aed4-a002af21cc7d" 00:16:54.210 ], 00:16:54.210 "product_name": "Malloc disk", 00:16:54.210 "block_size": 4128, 00:16:54.210 "num_blocks": 8192, 00:16:54.210 "uuid": "45f6cf40-eb0b-4f4f-aed4-a002af21cc7d", 00:16:54.210 "md_size": 32, 00:16:54.210 "md_interleave": true, 00:16:54.210 "dif_type": 0, 00:16:54.210 "assigned_rate_limits": { 00:16:54.210 "rw_ios_per_sec": 0, 00:16:54.210 "rw_mbytes_per_sec": 0, 00:16:54.210 "r_mbytes_per_sec": 0, 00:16:54.210 "w_mbytes_per_sec": 0 00:16:54.210 }, 00:16:54.210 "claimed": true, 00:16:54.210 "claim_type": "exclusive_write", 
00:16:54.210 "zoned": false, 00:16:54.210 "supported_io_types": { 00:16:54.210 "read": true, 00:16:54.210 "write": true, 00:16:54.210 "unmap": true, 00:16:54.210 "flush": true, 00:16:54.210 "reset": true, 00:16:54.210 "nvme_admin": false, 00:16:54.210 "nvme_io": false, 00:16:54.210 "nvme_io_md": false, 00:16:54.210 "write_zeroes": true, 00:16:54.210 "zcopy": true, 00:16:54.210 "get_zone_info": false, 00:16:54.210 "zone_management": false, 00:16:54.210 "zone_append": false, 00:16:54.210 "compare": false, 00:16:54.210 "compare_and_write": false, 00:16:54.210 "abort": true, 00:16:54.210 "seek_hole": false, 00:16:54.210 "seek_data": false, 00:16:54.210 "copy": true, 00:16:54.210 "nvme_iov_md": false 00:16:54.210 }, 00:16:54.210 "memory_domains": [ 00:16:54.210 { 00:16:54.210 "dma_device_id": "system", 00:16:54.210 "dma_device_type": 1 00:16:54.210 }, 00:16:54.210 { 00:16:54.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.210 "dma_device_type": 2 00:16:54.210 } 00:16:54.210 ], 00:16:54.210 "driver_specific": {} 00:16:54.210 } 00:16:54.210 ] 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.210 
12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.210 "name": "Existed_Raid", 00:16:54.210 "uuid": "6ec3633f-6b03-445e-ab89-8049b6a18840", 00:16:54.210 "strip_size_kb": 0, 00:16:54.210 "state": "online", 00:16:54.210 "raid_level": "raid1", 00:16:54.210 "superblock": true, 00:16:54.210 "num_base_bdevs": 2, 00:16:54.210 "num_base_bdevs_discovered": 2, 00:16:54.210 
"num_base_bdevs_operational": 2, 00:16:54.210 "base_bdevs_list": [ 00:16:54.210 { 00:16:54.210 "name": "BaseBdev1", 00:16:54.210 "uuid": "c8c7f803-d83e-44c5-a31b-c33c976596f3", 00:16:54.210 "is_configured": true, 00:16:54.210 "data_offset": 256, 00:16:54.210 "data_size": 7936 00:16:54.210 }, 00:16:54.210 { 00:16:54.210 "name": "BaseBdev2", 00:16:54.210 "uuid": "45f6cf40-eb0b-4f4f-aed4-a002af21cc7d", 00:16:54.210 "is_configured": true, 00:16:54.210 "data_offset": 256, 00:16:54.210 "data_size": 7936 00:16:54.210 } 00:16:54.210 ] 00:16:54.210 }' 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.210 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.469 12:36:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.469 [2024-11-19 12:36:59.691417] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.469 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.728 "name": "Existed_Raid", 00:16:54.728 "aliases": [ 00:16:54.728 "6ec3633f-6b03-445e-ab89-8049b6a18840" 00:16:54.728 ], 00:16:54.728 "product_name": "Raid Volume", 00:16:54.728 "block_size": 4128, 00:16:54.728 "num_blocks": 7936, 00:16:54.728 "uuid": "6ec3633f-6b03-445e-ab89-8049b6a18840", 00:16:54.728 "md_size": 32, 00:16:54.728 "md_interleave": true, 00:16:54.728 "dif_type": 0, 00:16:54.728 "assigned_rate_limits": { 00:16:54.728 "rw_ios_per_sec": 0, 00:16:54.728 "rw_mbytes_per_sec": 0, 00:16:54.728 "r_mbytes_per_sec": 0, 00:16:54.728 "w_mbytes_per_sec": 0 00:16:54.728 }, 00:16:54.728 "claimed": false, 00:16:54.728 "zoned": false, 00:16:54.728 "supported_io_types": { 00:16:54.728 "read": true, 00:16:54.728 "write": true, 00:16:54.728 "unmap": false, 00:16:54.728 "flush": false, 00:16:54.728 "reset": true, 00:16:54.728 "nvme_admin": false, 00:16:54.728 "nvme_io": false, 00:16:54.728 "nvme_io_md": false, 00:16:54.728 "write_zeroes": true, 00:16:54.728 "zcopy": false, 00:16:54.728 "get_zone_info": false, 00:16:54.728 "zone_management": false, 00:16:54.728 "zone_append": false, 00:16:54.728 "compare": false, 00:16:54.728 "compare_and_write": false, 00:16:54.728 "abort": false, 00:16:54.728 "seek_hole": false, 00:16:54.728 "seek_data": false, 00:16:54.728 "copy": false, 00:16:54.728 "nvme_iov_md": false 00:16:54.728 }, 00:16:54.728 "memory_domains": [ 00:16:54.728 { 00:16:54.728 "dma_device_id": "system", 00:16:54.728 "dma_device_type": 1 00:16:54.728 }, 00:16:54.728 { 00:16:54.728 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:54.728 "dma_device_type": 2 00:16:54.728 }, 00:16:54.728 { 00:16:54.728 "dma_device_id": "system", 00:16:54.728 "dma_device_type": 1 00:16:54.728 }, 00:16:54.728 { 00:16:54.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.728 "dma_device_type": 2 00:16:54.728 } 00:16:54.728 ], 00:16:54.728 "driver_specific": { 00:16:54.728 "raid": { 00:16:54.728 "uuid": "6ec3633f-6b03-445e-ab89-8049b6a18840", 00:16:54.728 "strip_size_kb": 0, 00:16:54.728 "state": "online", 00:16:54.728 "raid_level": "raid1", 00:16:54.728 "superblock": true, 00:16:54.728 "num_base_bdevs": 2, 00:16:54.728 "num_base_bdevs_discovered": 2, 00:16:54.728 "num_base_bdevs_operational": 2, 00:16:54.728 "base_bdevs_list": [ 00:16:54.728 { 00:16:54.728 "name": "BaseBdev1", 00:16:54.728 "uuid": "c8c7f803-d83e-44c5-a31b-c33c976596f3", 00:16:54.728 "is_configured": true, 00:16:54.728 "data_offset": 256, 00:16:54.728 "data_size": 7936 00:16:54.728 }, 00:16:54.728 { 00:16:54.728 "name": "BaseBdev2", 00:16:54.728 "uuid": "45f6cf40-eb0b-4f4f-aed4-a002af21cc7d", 00:16:54.728 "is_configured": true, 00:16:54.728 "data_offset": 256, 00:16:54.728 "data_size": 7936 00:16:54.728 } 00:16:54.728 ] 00:16:54.728 } 00:16:54.728 } 00:16:54.728 }' 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:54.728 BaseBdev2' 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:54.728 
12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.728 [2024-11-19 12:36:59.926901] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.728 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.729 12:36:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.729 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.988 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.988 "name": "Existed_Raid", 00:16:54.988 "uuid": "6ec3633f-6b03-445e-ab89-8049b6a18840", 00:16:54.988 "strip_size_kb": 0, 00:16:54.988 "state": "online", 00:16:54.988 "raid_level": "raid1", 00:16:54.988 "superblock": true, 00:16:54.988 "num_base_bdevs": 2, 00:16:54.988 "num_base_bdevs_discovered": 1, 00:16:54.988 "num_base_bdevs_operational": 1, 00:16:54.988 "base_bdevs_list": [ 00:16:54.988 { 00:16:54.988 "name": null, 00:16:54.988 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:54.988 "is_configured": false, 00:16:54.988 "data_offset": 0, 00:16:54.988 "data_size": 7936 00:16:54.988 }, 00:16:54.988 { 00:16:54.988 "name": "BaseBdev2", 00:16:54.988 "uuid": "45f6cf40-eb0b-4f4f-aed4-a002af21cc7d", 00:16:54.988 "is_configured": true, 00:16:54.988 "data_offset": 256, 00:16:54.988 "data_size": 7936 00:16:54.988 } 00:16:54.988 ] 00:16:54.988 }' 00:16:54.988 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.988 12:36:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:55.248 12:37:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.248 [2024-11-19 12:37:00.417963] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:55.248 [2024-11-19 12:37:00.418080] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.248 [2024-11-19 12:37:00.430198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.248 [2024-11-19 12:37:00.430320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.248 [2024-11-19 12:37:00.430362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98989 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98989 ']' 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98989 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.248 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98989 00:16:55.507 killing process with pid 98989 00:16:55.507 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:55.507 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:55.507 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98989' 00:16:55.507 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98989 00:16:55.507 [2024-11-19 12:37:00.529911] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.507 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98989 00:16:55.507 [2024-11-19 12:37:00.530975] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.766 
12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:55.766 00:16:55.766 real 0m4.081s 00:16:55.766 user 0m6.349s 00:16:55.766 sys 0m0.928s 00:16:55.766 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.766 12:37:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.766 ************************************ 00:16:55.766 END TEST raid_state_function_test_sb_md_interleaved 00:16:55.766 ************************************ 00:16:55.766 12:37:00 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:55.766 12:37:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:55.766 12:37:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.766 12:37:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.766 ************************************ 00:16:55.766 START TEST raid_superblock_test_md_interleaved 00:16:55.766 ************************************ 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99226 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99226 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99226 ']' 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.766 12:37:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.766 [2024-11-19 12:37:00.951368] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:55.766 [2024-11-19 12:37:00.951658] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99226 ] 00:16:56.025 [2024-11-19 12:37:01.116708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.025 [2024-11-19 12:37:01.168966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.025 [2024-11-19 12:37:01.211446] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.025 [2024-11-19 12:37:01.211574] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.594 malloc1 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.594 [2024-11-19 12:37:01.826084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:56.594 [2024-11-19 12:37:01.826155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.594 [2024-11-19 12:37:01.826204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:56.594 [2024-11-19 12:37:01.826219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.594 
[2024-11-19 12:37:01.828245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.594 [2024-11-19 12:37:01.828284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:56.594 pt1 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.594 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.854 malloc2 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.854 [2024-11-19 12:37:01.868054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.854 [2024-11-19 12:37:01.868165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.854 [2024-11-19 12:37:01.868201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:56.854 [2024-11-19 12:37:01.868231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.854 [2024-11-19 12:37:01.870157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.854 [2024-11-19 12:37:01.870230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.854 pt2 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.854 [2024-11-19 12:37:01.880080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:56.854 [2024-11-19 12:37:01.881990] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.854 [2024-11-19 12:37:01.882194] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:56.854 [2024-11-19 12:37:01.882243] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:56.854 [2024-11-19 12:37:01.882364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:56.854 [2024-11-19 12:37:01.882478] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:56.854 [2024-11-19 12:37:01.882517] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:56.854 [2024-11-19 12:37:01.882645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.854 
12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.854 "name": "raid_bdev1", 00:16:56.854 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:56.854 "strip_size_kb": 0, 00:16:56.854 "state": "online", 00:16:56.854 "raid_level": "raid1", 00:16:56.854 "superblock": true, 00:16:56.854 "num_base_bdevs": 2, 00:16:56.854 "num_base_bdevs_discovered": 2, 00:16:56.854 "num_base_bdevs_operational": 2, 00:16:56.854 "base_bdevs_list": [ 00:16:56.854 { 00:16:56.854 "name": "pt1", 00:16:56.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:56.854 "is_configured": true, 00:16:56.854 "data_offset": 256, 00:16:56.854 "data_size": 7936 00:16:56.854 }, 00:16:56.854 { 00:16:56.854 "name": "pt2", 00:16:56.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.854 "is_configured": true, 00:16:56.854 "data_offset": 256, 00:16:56.854 "data_size": 7936 00:16:56.854 } 00:16:56.854 ] 00:16:56.854 }' 00:16:56.854 12:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.854 12:37:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 [2024-11-19 12:37:02.387551] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:57.423 "name": "raid_bdev1", 00:16:57.423 "aliases": [ 00:16:57.423 "f4154f35-e41d-4a90-865e-b3e16c44a30d" 00:16:57.423 ], 00:16:57.423 "product_name": "Raid Volume", 00:16:57.423 "block_size": 4128, 00:16:57.423 "num_blocks": 7936, 00:16:57.423 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:57.423 "md_size": 32, 
00:16:57.423 "md_interleave": true, 00:16:57.423 "dif_type": 0, 00:16:57.423 "assigned_rate_limits": { 00:16:57.423 "rw_ios_per_sec": 0, 00:16:57.423 "rw_mbytes_per_sec": 0, 00:16:57.423 "r_mbytes_per_sec": 0, 00:16:57.423 "w_mbytes_per_sec": 0 00:16:57.423 }, 00:16:57.423 "claimed": false, 00:16:57.423 "zoned": false, 00:16:57.423 "supported_io_types": { 00:16:57.423 "read": true, 00:16:57.423 "write": true, 00:16:57.423 "unmap": false, 00:16:57.423 "flush": false, 00:16:57.423 "reset": true, 00:16:57.423 "nvme_admin": false, 00:16:57.423 "nvme_io": false, 00:16:57.423 "nvme_io_md": false, 00:16:57.423 "write_zeroes": true, 00:16:57.423 "zcopy": false, 00:16:57.423 "get_zone_info": false, 00:16:57.423 "zone_management": false, 00:16:57.423 "zone_append": false, 00:16:57.423 "compare": false, 00:16:57.423 "compare_and_write": false, 00:16:57.423 "abort": false, 00:16:57.423 "seek_hole": false, 00:16:57.423 "seek_data": false, 00:16:57.423 "copy": false, 00:16:57.423 "nvme_iov_md": false 00:16:57.423 }, 00:16:57.423 "memory_domains": [ 00:16:57.423 { 00:16:57.423 "dma_device_id": "system", 00:16:57.423 "dma_device_type": 1 00:16:57.423 }, 00:16:57.423 { 00:16:57.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.423 "dma_device_type": 2 00:16:57.423 }, 00:16:57.423 { 00:16:57.423 "dma_device_id": "system", 00:16:57.423 "dma_device_type": 1 00:16:57.423 }, 00:16:57.423 { 00:16:57.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.423 "dma_device_type": 2 00:16:57.423 } 00:16:57.423 ], 00:16:57.423 "driver_specific": { 00:16:57.423 "raid": { 00:16:57.423 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:57.423 "strip_size_kb": 0, 00:16:57.423 "state": "online", 00:16:57.423 "raid_level": "raid1", 00:16:57.423 "superblock": true, 00:16:57.423 "num_base_bdevs": 2, 00:16:57.423 "num_base_bdevs_discovered": 2, 00:16:57.423 "num_base_bdevs_operational": 2, 00:16:57.423 "base_bdevs_list": [ 00:16:57.423 { 00:16:57.423 "name": "pt1", 00:16:57.423 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:57.423 "is_configured": true, 00:16:57.423 "data_offset": 256, 00:16:57.423 "data_size": 7936 00:16:57.423 }, 00:16:57.423 { 00:16:57.423 "name": "pt2", 00:16:57.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.423 "is_configured": true, 00:16:57.423 "data_offset": 256, 00:16:57.423 "data_size": 7936 00:16:57.423 } 00:16:57.423 ] 00:16:57.423 } 00:16:57.423 } 00:16:57.423 }' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:57.423 pt2' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:57.423 12:37:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:57.423 [2024-11-19 12:37:02.623195] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f4154f35-e41d-4a90-865e-b3e16c44a30d 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f4154f35-e41d-4a90-865e-b3e16c44a30d ']' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.423 [2024-11-19 12:37:02.650945] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.423 [2024-11-19 12:37:02.650982] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.423 [2024-11-19 12:37:02.651087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.423 [2024-11-19 12:37:02.651190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.423 [2024-11-19 12:37:02.651207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:57.423 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.424 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.424 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.424 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.683 12:37:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:57.683 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:57.683 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.683 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:57.683 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.683 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.683 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.684 12:37:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.684 [2024-11-19 12:37:02.782734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:57.684 [2024-11-19 12:37:02.784947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:57.684 [2024-11-19 12:37:02.785085] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:16:57.684 [2024-11-19 12:37:02.785204] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:57.684 [2024-11-19 12:37:02.785277] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.684 [2024-11-19 12:37:02.785328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:57.684 request: 00:16:57.684 { 00:16:57.684 "name": "raid_bdev1", 00:16:57.684 "raid_level": "raid1", 00:16:57.684 "base_bdevs": [ 00:16:57.684 "malloc1", 00:16:57.684 "malloc2" 00:16:57.684 ], 00:16:57.684 "superblock": false, 00:16:57.684 "method": "bdev_raid_create", 00:16:57.684 "req_id": 1 00:16:57.684 } 00:16:57.684 Got JSON-RPC error response 00:16:57.684 response: 00:16:57.684 { 00:16:57.684 "code": -17, 00:16:57.684 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:57.684 } 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.684 12:37:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.684 [2024-11-19 12:37:02.850524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.684 [2024-11-19 12:37:02.850669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.684 [2024-11-19 12:37:02.850710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:57.684 [2024-11-19 12:37:02.850757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.684 [2024-11-19 12:37:02.852773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.684 [2024-11-19 12:37:02.852844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.684 [2024-11-19 12:37:02.852912] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:57.684 [2024-11-19 12:37:02.852959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.684 pt1 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.684 12:37:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.684 
"name": "raid_bdev1", 00:16:57.684 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:57.684 "strip_size_kb": 0, 00:16:57.684 "state": "configuring", 00:16:57.684 "raid_level": "raid1", 00:16:57.684 "superblock": true, 00:16:57.684 "num_base_bdevs": 2, 00:16:57.684 "num_base_bdevs_discovered": 1, 00:16:57.684 "num_base_bdevs_operational": 2, 00:16:57.684 "base_bdevs_list": [ 00:16:57.684 { 00:16:57.684 "name": "pt1", 00:16:57.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.684 "is_configured": true, 00:16:57.684 "data_offset": 256, 00:16:57.684 "data_size": 7936 00:16:57.684 }, 00:16:57.684 { 00:16:57.684 "name": null, 00:16:57.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.684 "is_configured": false, 00:16:57.684 "data_offset": 256, 00:16:57.684 "data_size": 7936 00:16:57.684 } 00:16:57.684 ] 00:16:57.684 }' 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.684 12:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.253 [2024-11-19 12:37:03.257853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.253 [2024-11-19 12:37:03.257932] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.253 [2024-11-19 12:37:03.257958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:58.253 [2024-11-19 12:37:03.257966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.253 [2024-11-19 12:37:03.258146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.253 [2024-11-19 12:37:03.258157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.253 [2024-11-19 12:37:03.258210] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:58.253 [2024-11-19 12:37:03.258230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.253 [2024-11-19 12:37:03.258318] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:58.253 [2024-11-19 12:37:03.258328] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:58.253 [2024-11-19 12:37:03.258407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:58.253 [2024-11-19 12:37:03.258464] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:58.253 [2024-11-19 12:37:03.258476] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:58.253 [2024-11-19 12:37:03.258533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.253 pt2 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:58.253 12:37:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.253 "name": 
"raid_bdev1", 00:16:58.253 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:58.253 "strip_size_kb": 0, 00:16:58.253 "state": "online", 00:16:58.253 "raid_level": "raid1", 00:16:58.253 "superblock": true, 00:16:58.253 "num_base_bdevs": 2, 00:16:58.253 "num_base_bdevs_discovered": 2, 00:16:58.253 "num_base_bdevs_operational": 2, 00:16:58.253 "base_bdevs_list": [ 00:16:58.253 { 00:16:58.253 "name": "pt1", 00:16:58.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.253 "is_configured": true, 00:16:58.253 "data_offset": 256, 00:16:58.253 "data_size": 7936 00:16:58.253 }, 00:16:58.253 { 00:16:58.253 "name": "pt2", 00:16:58.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.253 "is_configured": true, 00:16:58.253 "data_offset": 256, 00:16:58.253 "data_size": 7936 00:16:58.253 } 00:16:58.253 ] 00:16:58.253 }' 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.253 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.513 12:37:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.513 [2024-11-19 12:37:03.721327] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.513 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.513 "name": "raid_bdev1", 00:16:58.513 "aliases": [ 00:16:58.513 "f4154f35-e41d-4a90-865e-b3e16c44a30d" 00:16:58.513 ], 00:16:58.513 "product_name": "Raid Volume", 00:16:58.513 "block_size": 4128, 00:16:58.513 "num_blocks": 7936, 00:16:58.513 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:58.513 "md_size": 32, 00:16:58.513 "md_interleave": true, 00:16:58.513 "dif_type": 0, 00:16:58.513 "assigned_rate_limits": { 00:16:58.513 "rw_ios_per_sec": 0, 00:16:58.513 "rw_mbytes_per_sec": 0, 00:16:58.513 "r_mbytes_per_sec": 0, 00:16:58.513 "w_mbytes_per_sec": 0 00:16:58.513 }, 00:16:58.513 "claimed": false, 00:16:58.513 "zoned": false, 00:16:58.513 "supported_io_types": { 00:16:58.513 "read": true, 00:16:58.513 "write": true, 00:16:58.513 "unmap": false, 00:16:58.513 "flush": false, 00:16:58.513 "reset": true, 00:16:58.513 "nvme_admin": false, 00:16:58.513 "nvme_io": false, 00:16:58.513 "nvme_io_md": false, 00:16:58.513 "write_zeroes": true, 00:16:58.513 "zcopy": false, 00:16:58.513 "get_zone_info": false, 00:16:58.513 "zone_management": false, 00:16:58.513 "zone_append": false, 00:16:58.513 "compare": false, 00:16:58.513 "compare_and_write": false, 00:16:58.513 "abort": false, 00:16:58.513 "seek_hole": false, 00:16:58.513 "seek_data": false, 00:16:58.513 "copy": false, 00:16:58.513 "nvme_iov_md": 
false 00:16:58.513 }, 00:16:58.513 "memory_domains": [ 00:16:58.513 { 00:16:58.513 "dma_device_id": "system", 00:16:58.513 "dma_device_type": 1 00:16:58.513 }, 00:16:58.513 { 00:16:58.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.513 "dma_device_type": 2 00:16:58.513 }, 00:16:58.513 { 00:16:58.513 "dma_device_id": "system", 00:16:58.513 "dma_device_type": 1 00:16:58.513 }, 00:16:58.513 { 00:16:58.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.513 "dma_device_type": 2 00:16:58.513 } 00:16:58.513 ], 00:16:58.513 "driver_specific": { 00:16:58.513 "raid": { 00:16:58.513 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:58.513 "strip_size_kb": 0, 00:16:58.513 "state": "online", 00:16:58.513 "raid_level": "raid1", 00:16:58.513 "superblock": true, 00:16:58.513 "num_base_bdevs": 2, 00:16:58.513 "num_base_bdevs_discovered": 2, 00:16:58.513 "num_base_bdevs_operational": 2, 00:16:58.513 "base_bdevs_list": [ 00:16:58.513 { 00:16:58.513 "name": "pt1", 00:16:58.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.513 "is_configured": true, 00:16:58.513 "data_offset": 256, 00:16:58.513 "data_size": 7936 00:16:58.513 }, 00:16:58.513 { 00:16:58.513 "name": "pt2", 00:16:58.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.513 "is_configured": true, 00:16:58.513 "data_offset": 256, 00:16:58.513 "data_size": 7936 00:16:58.513 } 00:16:58.513 ] 00:16:58.513 } 00:16:58.513 } 00:16:58.513 }' 00:16:58.514 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:58.773 pt2' 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:58.773 [2024-11-19 12:37:03.956949] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f4154f35-e41d-4a90-865e-b3e16c44a30d '!=' f4154f35-e41d-4a90-865e-b3e16c44a30d ']' 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:58.773 12:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.773 [2024-11-19 12:37:04.004593] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.773 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.032 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:59.032 "name": "raid_bdev1", 00:16:59.032 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:59.032 "strip_size_kb": 0, 00:16:59.032 "state": "online", 00:16:59.032 "raid_level": "raid1", 00:16:59.032 "superblock": true, 00:16:59.032 "num_base_bdevs": 2, 00:16:59.032 "num_base_bdevs_discovered": 1, 00:16:59.032 "num_base_bdevs_operational": 1, 00:16:59.032 "base_bdevs_list": [ 00:16:59.032 { 00:16:59.032 "name": null, 00:16:59.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.032 "is_configured": false, 00:16:59.032 "data_offset": 0, 00:16:59.032 "data_size": 7936 00:16:59.032 }, 00:16:59.032 { 00:16:59.032 "name": "pt2", 00:16:59.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.032 "is_configured": true, 00:16:59.032 "data_offset": 256, 00:16:59.032 "data_size": 7936 00:16:59.032 } 00:16:59.032 ] 00:16:59.032 }' 00:16:59.032 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.032 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.291 [2024-11-19 12:37:04.471785] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.291 [2024-11-19 12:37:04.471884] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.291 [2024-11-19 12:37:04.472005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.291 [2024-11-19 12:37:04.472078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:59.291 [2024-11-19 12:37:04.472144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.291 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.291 [2024-11-19 12:37:04.547654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.291 [2024-11-19 12:37:04.547726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.291 [2024-11-19 12:37:04.547759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:59.291 [2024-11-19 12:37:04.547771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.550 [2024-11-19 12:37:04.549921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.550 [2024-11-19 12:37:04.549959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.550 [2024-11-19 12:37:04.550019] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.550 [2024-11-19 12:37:04.550056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.550 [2024-11-19 12:37:04.550126] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:59.551 [2024-11-19 12:37:04.550137] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:16:59.551 [2024-11-19 12:37:04.550235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:59.551 [2024-11-19 12:37:04.550307] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:59.551 [2024-11-19 12:37:04.550317] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:59.551 [2024-11-19 12:37:04.550380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.551 pt2 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.551 12:37:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.551 "name": "raid_bdev1", 00:16:59.551 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:59.551 "strip_size_kb": 0, 00:16:59.551 "state": "online", 00:16:59.551 "raid_level": "raid1", 00:16:59.551 "superblock": true, 00:16:59.551 "num_base_bdevs": 2, 00:16:59.551 "num_base_bdevs_discovered": 1, 00:16:59.551 "num_base_bdevs_operational": 1, 00:16:59.551 "base_bdevs_list": [ 00:16:59.551 { 00:16:59.551 "name": null, 00:16:59.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.551 "is_configured": false, 00:16:59.551 "data_offset": 256, 00:16:59.551 "data_size": 7936 00:16:59.551 }, 00:16:59.551 { 00:16:59.551 "name": "pt2", 00:16:59.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.551 "is_configured": true, 00:16:59.551 "data_offset": 256, 00:16:59.551 "data_size": 7936 00:16:59.551 } 00:16:59.551 ] 00:16:59.551 }' 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.551 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.862 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:59.862 12:37:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.862 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.862 [2024-11-19 12:37:04.994915] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.862 [2024-11-19 12:37:04.994953] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.862 [2024-11-19 12:37:04.995048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.862 [2024-11-19 12:37:04.995099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.862 [2024-11-19 12:37:04.995115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:59.862 12:37:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.862 [2024-11-19 12:37:05.038883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.862 [2024-11-19 12:37:05.039106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.862 [2024-11-19 12:37:05.039181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:59.862 [2024-11-19 12:37:05.039241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.862 [2024-11-19 12:37:05.041285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.862 [2024-11-19 12:37:05.041396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.862 [2024-11-19 12:37:05.041504] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:59.862 [2024-11-19 12:37:05.041560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.862 [2024-11-19 12:37:05.041651] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:59.862 [2024-11-19 12:37:05.041681] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.862 [2024-11-19 12:37:05.041702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:59.862 [2024-11-19 12:37:05.041762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.862 [2024-11-19 12:37:05.041836] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007400 00:16:59.862 [2024-11-19 12:37:05.041851] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:59.862 [2024-11-19 12:37:05.041920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:59.862 [2024-11-19 12:37:05.041980] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:59.862 [2024-11-19 12:37:05.041994] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:59.862 [2024-11-19 12:37:05.042064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.862 pt1 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.862 12:37:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.862 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.862 "name": "raid_bdev1", 00:16:59.862 "uuid": "f4154f35-e41d-4a90-865e-b3e16c44a30d", 00:16:59.862 "strip_size_kb": 0, 00:16:59.862 "state": "online", 00:16:59.862 "raid_level": "raid1", 00:16:59.862 "superblock": true, 00:16:59.862 "num_base_bdevs": 2, 00:16:59.862 "num_base_bdevs_discovered": 1, 00:16:59.862 "num_base_bdevs_operational": 1, 00:16:59.862 "base_bdevs_list": [ 00:16:59.862 { 00:16:59.862 "name": null, 00:16:59.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.862 "is_configured": false, 00:16:59.862 "data_offset": 256, 00:16:59.862 "data_size": 7936 00:16:59.862 }, 00:16:59.862 { 00:16:59.862 "name": "pt2", 00:16:59.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.863 "is_configured": true, 00:16:59.863 "data_offset": 256, 00:16:59.863 "data_size": 7936 00:16:59.863 } 00:16:59.863 ] 00:16:59.863 }' 00:16:59.863 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.863 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.432 [2024-11-19 12:37:05.566453] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f4154f35-e41d-4a90-865e-b3e16c44a30d '!=' f4154f35-e41d-4a90-865e-b3e16c44a30d ']' 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99226 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99226 ']' 00:17:00.432 12:37:05 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99226 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99226 00:17:00.432 killing process with pid 99226 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99226' 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99226 00:17:00.432 [2024-11-19 12:37:05.639916] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.432 [2024-11-19 12:37:05.640046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.432 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99226 00:17:00.432 [2024-11-19 12:37:05.640115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.432 [2024-11-19 12:37:05.640125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:00.432 [2024-11-19 12:37:05.663949] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.697 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:00.697 00:17:00.697 real 0m5.056s 00:17:00.697 user 0m8.164s 00:17:00.697 sys 0m1.184s 00:17:00.697 
12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.697 12:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.697 ************************************ 00:17:00.697 END TEST raid_superblock_test_md_interleaved 00:17:00.697 ************************************ 00:17:00.955 12:37:05 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:00.955 12:37:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:00.955 12:37:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.955 12:37:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.955 ************************************ 00:17:00.955 START TEST raid_rebuild_test_sb_md_interleaved 00:17:00.955 ************************************ 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:00.955 12:37:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=99543 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99543 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99543 ']' 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.955 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.955 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:00.955 Zero copy mechanism will not be used. 00:17:00.955 [2024-11-19 12:37:06.087293] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:00.955 [2024-11-19 12:37:06.087433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99543 ] 00:17:01.214 [2024-11-19 12:37:06.248699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.214 [2024-11-19 12:37:06.300840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.214 [2024-11-19 12:37:06.342394] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.214 [2024-11-19 12:37:06.342435] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.782 BaseBdev1_malloc 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.782 12:37:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.782 [2024-11-19 12:37:06.968391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:01.782 [2024-11-19 12:37:06.968458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.782 [2024-11-19 12:37:06.968489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.782 [2024-11-19 12:37:06.968499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.782 [2024-11-19 12:37:06.970362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.782 [2024-11-19 12:37:06.970397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:01.782 BaseBdev1 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.782 BaseBdev2_malloc 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.782 12:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.782 [2024-11-19 12:37:07.005128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:01.782 [2024-11-19 12:37:07.005192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.782 [2024-11-19 12:37:07.005218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.782 [2024-11-19 12:37:07.005227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.782 [2024-11-19 12:37:07.007059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.782 [2024-11-19 12:37:07.007094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:01.782 BaseBdev2 00:17:01.782 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.782 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:01.782 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.782 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.782 spare_malloc 00:17:01.782 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.782 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:01.782 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.782 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.041 spare_delay 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.041 [2024-11-19 12:37:07.046019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.041 [2024-11-19 12:37:07.046090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.041 [2024-11-19 12:37:07.046119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:02.041 [2024-11-19 12:37:07.046128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.041 [2024-11-19 12:37:07.048084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.041 [2024-11-19 12:37:07.048121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.041 spare 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.041 [2024-11-19 12:37:07.058019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.041 [2024-11-19 12:37:07.059854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.041 [2024-11-19 
12:37:07.060032] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:02.041 [2024-11-19 12:37:07.060046] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:02.041 [2024-11-19 12:37:07.060146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:02.041 [2024-11-19 12:37:07.060219] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:02.041 [2024-11-19 12:37:07.060233] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:02.041 [2024-11-19 12:37:07.060309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.041 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:02.042 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.042 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.042 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.042 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.042 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.042 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.042 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.042 "name": "raid_bdev1", 00:17:02.042 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:02.042 "strip_size_kb": 0, 00:17:02.042 "state": "online", 00:17:02.042 "raid_level": "raid1", 00:17:02.042 "superblock": true, 00:17:02.042 "num_base_bdevs": 2, 00:17:02.042 "num_base_bdevs_discovered": 2, 00:17:02.042 "num_base_bdevs_operational": 2, 00:17:02.042 "base_bdevs_list": [ 00:17:02.042 { 00:17:02.042 "name": "BaseBdev1", 00:17:02.042 "uuid": "6747653c-4923-5a23-847d-49acafafc8ce", 00:17:02.042 "is_configured": true, 00:17:02.042 "data_offset": 256, 00:17:02.042 "data_size": 7936 00:17:02.042 }, 00:17:02.042 { 00:17:02.042 "name": "BaseBdev2", 00:17:02.042 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:02.042 "is_configured": true, 00:17:02.042 "data_offset": 256, 00:17:02.042 "data_size": 7936 00:17:02.042 } 00:17:02.042 ] 00:17:02.042 }' 00:17:02.042 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.042 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.301 12:37:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.301 [2024-11-19 12:37:07.493587] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:02.301 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:02.560 12:37:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.560 [2024-11-19 12:37:07.569138] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.560 12:37:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.560 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.560 "name": "raid_bdev1", 00:17:02.560 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:02.560 "strip_size_kb": 0, 00:17:02.560 "state": "online", 00:17:02.560 "raid_level": "raid1", 00:17:02.560 "superblock": true, 00:17:02.560 "num_base_bdevs": 2, 00:17:02.560 "num_base_bdevs_discovered": 1, 00:17:02.560 "num_base_bdevs_operational": 1, 00:17:02.560 "base_bdevs_list": [ 00:17:02.560 { 00:17:02.560 "name": null, 00:17:02.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.560 "is_configured": false, 00:17:02.560 "data_offset": 0, 00:17:02.560 "data_size": 7936 00:17:02.560 }, 00:17:02.560 { 00:17:02.561 "name": "BaseBdev2", 00:17:02.561 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:02.561 "is_configured": true, 00:17:02.561 "data_offset": 256, 00:17:02.561 "data_size": 7936 00:17:02.561 } 00:17:02.561 ] 00:17:02.561 }' 00:17:02.561 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.561 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.819 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.820 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.820 12:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.820 [2024-11-19 12:37:08.004385] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.820 [2024-11-19 12:37:08.007501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:02.820 [2024-11-19 12:37:08.009491] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:02.820 12:37:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.820 12:37:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.197 "name": "raid_bdev1", 00:17:04.197 
"uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:04.197 "strip_size_kb": 0, 00:17:04.197 "state": "online", 00:17:04.197 "raid_level": "raid1", 00:17:04.197 "superblock": true, 00:17:04.197 "num_base_bdevs": 2, 00:17:04.197 "num_base_bdevs_discovered": 2, 00:17:04.197 "num_base_bdevs_operational": 2, 00:17:04.197 "process": { 00:17:04.197 "type": "rebuild", 00:17:04.197 "target": "spare", 00:17:04.197 "progress": { 00:17:04.197 "blocks": 2560, 00:17:04.197 "percent": 32 00:17:04.197 } 00:17:04.197 }, 00:17:04.197 "base_bdevs_list": [ 00:17:04.197 { 00:17:04.197 "name": "spare", 00:17:04.197 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:04.197 "is_configured": true, 00:17:04.197 "data_offset": 256, 00:17:04.197 "data_size": 7936 00:17:04.197 }, 00:17:04.197 { 00:17:04.197 "name": "BaseBdev2", 00:17:04.197 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:04.197 "is_configured": true, 00:17:04.197 "data_offset": 256, 00:17:04.197 "data_size": 7936 00:17:04.197 } 00:17:04.197 ] 00:17:04.197 }' 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.197 [2024-11-19 12:37:09.172510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:04.197 [2024-11-19 12:37:09.215653] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:04.197 [2024-11-19 12:37:09.215755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.197 [2024-11-19 12:37:09.215778] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.197 [2024-11-19 12:37:09.215787] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.197 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.198 "name": "raid_bdev1", 00:17:04.198 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:04.198 "strip_size_kb": 0, 00:17:04.198 "state": "online", 00:17:04.198 "raid_level": "raid1", 00:17:04.198 "superblock": true, 00:17:04.198 "num_base_bdevs": 2, 00:17:04.198 "num_base_bdevs_discovered": 1, 00:17:04.198 "num_base_bdevs_operational": 1, 00:17:04.198 "base_bdevs_list": [ 00:17:04.198 { 00:17:04.198 "name": null, 00:17:04.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.198 "is_configured": false, 00:17:04.198 "data_offset": 0, 00:17:04.198 "data_size": 7936 00:17:04.198 }, 00:17:04.198 { 00:17:04.198 "name": "BaseBdev2", 00:17:04.198 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:04.198 "is_configured": true, 00:17:04.198 "data_offset": 256, 00:17:04.198 "data_size": 7936 00:17:04.198 } 00:17:04.198 ] 00:17:04.198 }' 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.198 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.456 "name": "raid_bdev1", 00:17:04.456 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:04.456 "strip_size_kb": 0, 00:17:04.456 "state": "online", 00:17:04.456 "raid_level": "raid1", 00:17:04.456 "superblock": true, 00:17:04.456 "num_base_bdevs": 2, 00:17:04.456 "num_base_bdevs_discovered": 1, 00:17:04.456 "num_base_bdevs_operational": 1, 00:17:04.456 "base_bdevs_list": [ 00:17:04.456 { 00:17:04.456 "name": null, 00:17:04.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.456 "is_configured": false, 00:17:04.456 "data_offset": 0, 00:17:04.456 "data_size": 7936 00:17:04.456 }, 00:17:04.456 { 00:17:04.456 "name": "BaseBdev2", 00:17:04.456 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:04.456 "is_configured": true, 00:17:04.456 "data_offset": 256, 00:17:04.456 "data_size": 7936 00:17:04.456 } 00:17:04.456 ] 00:17:04.456 }' 
00:17:04.456 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.457 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.715 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.716 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.716 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:04.716 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.716 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.716 [2024-11-19 12:37:09.750976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.716 [2024-11-19 12:37:09.753933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:04.716 [2024-11-19 12:37:09.755919] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.716 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.716 12:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.651 "name": "raid_bdev1", 00:17:05.651 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:05.651 "strip_size_kb": 0, 00:17:05.651 "state": "online", 00:17:05.651 "raid_level": "raid1", 00:17:05.651 "superblock": true, 00:17:05.651 "num_base_bdevs": 2, 00:17:05.651 "num_base_bdevs_discovered": 2, 00:17:05.651 "num_base_bdevs_operational": 2, 00:17:05.651 "process": { 00:17:05.651 "type": "rebuild", 00:17:05.651 "target": "spare", 00:17:05.651 "progress": { 00:17:05.651 "blocks": 2560, 00:17:05.651 "percent": 32 00:17:05.651 } 00:17:05.651 }, 00:17:05.651 "base_bdevs_list": [ 00:17:05.651 { 00:17:05.651 "name": "spare", 00:17:05.651 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:05.651 "is_configured": true, 00:17:05.651 "data_offset": 256, 00:17:05.651 "data_size": 7936 00:17:05.651 }, 00:17:05.651 { 00:17:05.651 "name": "BaseBdev2", 00:17:05.651 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:05.651 "is_configured": true, 00:17:05.651 "data_offset": 256, 00:17:05.651 "data_size": 7936 00:17:05.651 } 00:17:05.651 ] 00:17:05.651 }' 00:17:05.651 12:37:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:05.651 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=623 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.651 12:37:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.651 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.910 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.910 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.910 "name": "raid_bdev1", 00:17:05.910 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:05.910 "strip_size_kb": 0, 00:17:05.910 "state": "online", 00:17:05.910 "raid_level": "raid1", 00:17:05.910 "superblock": true, 00:17:05.910 "num_base_bdevs": 2, 00:17:05.910 "num_base_bdevs_discovered": 2, 00:17:05.910 "num_base_bdevs_operational": 2, 00:17:05.910 "process": { 00:17:05.910 "type": "rebuild", 00:17:05.910 "target": "spare", 00:17:05.910 "progress": { 00:17:05.910 "blocks": 2816, 00:17:05.910 "percent": 35 00:17:05.910 } 00:17:05.910 }, 00:17:05.910 "base_bdevs_list": [ 00:17:05.910 { 00:17:05.910 "name": "spare", 00:17:05.910 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:05.910 "is_configured": true, 00:17:05.910 "data_offset": 256, 00:17:05.910 "data_size": 7936 00:17:05.910 }, 00:17:05.910 { 00:17:05.910 "name": "BaseBdev2", 00:17:05.910 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:05.910 "is_configured": true, 00:17:05.910 "data_offset": 256, 00:17:05.910 "data_size": 7936 00:17:05.910 } 00:17:05.910 ] 00:17:05.910 }' 00:17:05.910 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.910 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.910 12:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.910 12:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.910 12:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.847 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.847 12:37:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.847 "name": "raid_bdev1", 00:17:06.847 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:06.847 "strip_size_kb": 0, 00:17:06.848 "state": "online", 00:17:06.848 "raid_level": "raid1", 00:17:06.848 "superblock": true, 00:17:06.848 "num_base_bdevs": 2, 00:17:06.848 "num_base_bdevs_discovered": 2, 00:17:06.848 "num_base_bdevs_operational": 2, 00:17:06.848 "process": { 00:17:06.848 "type": "rebuild", 00:17:06.848 "target": "spare", 00:17:06.848 "progress": { 00:17:06.848 "blocks": 5632, 00:17:06.848 "percent": 70 00:17:06.848 } 00:17:06.848 }, 00:17:06.848 "base_bdevs_list": [ 00:17:06.848 { 00:17:06.848 "name": "spare", 00:17:06.848 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:06.848 "is_configured": true, 00:17:06.848 "data_offset": 256, 00:17:06.848 "data_size": 7936 00:17:06.848 }, 00:17:06.848 { 00:17:06.848 "name": "BaseBdev2", 00:17:06.848 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:06.848 "is_configured": true, 00:17:06.848 "data_offset": 256, 00:17:06.848 "data_size": 7936 00:17:06.848 } 00:17:06.848 ] 00:17:06.848 }' 00:17:06.848 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.848 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.848 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.106 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.106 12:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.675 [2024-11-19 12:37:12.869855] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:07.675 [2024-11-19 12:37:12.869960] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:07.675 [2024-11-19 12:37:12.870106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.934 "name": "raid_bdev1", 00:17:07.934 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:07.934 "strip_size_kb": 0, 00:17:07.934 "state": "online", 00:17:07.934 "raid_level": "raid1", 00:17:07.934 "superblock": true, 00:17:07.934 "num_base_bdevs": 2, 00:17:07.934 
"num_base_bdevs_discovered": 2, 00:17:07.934 "num_base_bdevs_operational": 2, 00:17:07.934 "base_bdevs_list": [ 00:17:07.934 { 00:17:07.934 "name": "spare", 00:17:07.934 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:07.934 "is_configured": true, 00:17:07.934 "data_offset": 256, 00:17:07.934 "data_size": 7936 00:17:07.934 }, 00:17:07.934 { 00:17:07.934 "name": "BaseBdev2", 00:17:07.934 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:07.934 "is_configured": true, 00:17:07.934 "data_offset": 256, 00:17:07.934 "data_size": 7936 00:17:07.934 } 00:17:07.934 ] 00:17:07.934 }' 00:17:07.934 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.194 12:37:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.194 "name": "raid_bdev1", 00:17:08.194 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:08.194 "strip_size_kb": 0, 00:17:08.194 "state": "online", 00:17:08.194 "raid_level": "raid1", 00:17:08.194 "superblock": true, 00:17:08.194 "num_base_bdevs": 2, 00:17:08.194 "num_base_bdevs_discovered": 2, 00:17:08.194 "num_base_bdevs_operational": 2, 00:17:08.194 "base_bdevs_list": [ 00:17:08.194 { 00:17:08.194 "name": "spare", 00:17:08.194 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:08.194 "is_configured": true, 00:17:08.194 "data_offset": 256, 00:17:08.194 "data_size": 7936 00:17:08.194 }, 00:17:08.194 { 00:17:08.194 "name": "BaseBdev2", 00:17:08.194 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:08.194 "is_configured": true, 00:17:08.194 "data_offset": 256, 00:17:08.194 "data_size": 7936 00:17:08.194 } 00:17:08.194 ] 00:17:08.194 }' 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.194 12:37:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.194 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.195 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.454 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.454 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.454 "name": 
"raid_bdev1", 00:17:08.454 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:08.454 "strip_size_kb": 0, 00:17:08.454 "state": "online", 00:17:08.454 "raid_level": "raid1", 00:17:08.454 "superblock": true, 00:17:08.454 "num_base_bdevs": 2, 00:17:08.454 "num_base_bdevs_discovered": 2, 00:17:08.454 "num_base_bdevs_operational": 2, 00:17:08.454 "base_bdevs_list": [ 00:17:08.454 { 00:17:08.454 "name": "spare", 00:17:08.454 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:08.454 "is_configured": true, 00:17:08.454 "data_offset": 256, 00:17:08.454 "data_size": 7936 00:17:08.454 }, 00:17:08.454 { 00:17:08.454 "name": "BaseBdev2", 00:17:08.454 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:08.454 "is_configured": true, 00:17:08.454 "data_offset": 256, 00:17:08.454 "data_size": 7936 00:17:08.454 } 00:17:08.454 ] 00:17:08.454 }' 00:17:08.454 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.454 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.713 [2024-11-19 12:37:13.876128] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.713 [2024-11-19 12:37:13.876164] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.713 [2024-11-19 12:37:13.876276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.713 [2024-11-19 12:37:13.876348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.713 [2024-11-19 
12:37:13.876368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:08.713 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.713 12:37:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.713 [2024-11-19 12:37:13.948008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:08.713 [2024-11-19 12:37:13.948101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.713 [2024-11-19 12:37:13.948127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:08.713 [2024-11-19 12:37:13.948138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.713 [2024-11-19 12:37:13.950127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.713 [2024-11-19 12:37:13.950185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:08.713 [2024-11-19 12:37:13.950253] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:08.713 [2024-11-19 12:37:13.950311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.713 [2024-11-19 12:37:13.950417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.713 spare 00:17:08.714 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.714 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:08.714 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.714 12:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.973 [2024-11-19 12:37:14.050334] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:17:08.973 [2024-11-19 12:37:14.050479] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:08.973 [2024-11-19 12:37:14.050670] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:08.973 [2024-11-19 12:37:14.050891] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:17:08.973 [2024-11-19 12:37:14.050936] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:17:08.973 [2024-11-19 12:37:14.051100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.973 12:37:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.973 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.973 "name": "raid_bdev1", 00:17:08.973 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:08.973 "strip_size_kb": 0, 00:17:08.973 "state": "online", 00:17:08.973 "raid_level": "raid1", 00:17:08.973 "superblock": true, 00:17:08.973 "num_base_bdevs": 2, 00:17:08.973 "num_base_bdevs_discovered": 2, 00:17:08.973 "num_base_bdevs_operational": 2, 00:17:08.973 "base_bdevs_list": [ 00:17:08.973 { 00:17:08.973 "name": "spare", 00:17:08.974 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:08.974 "is_configured": true, 00:17:08.974 "data_offset": 256, 00:17:08.974 "data_size": 7936 00:17:08.974 }, 00:17:08.974 { 00:17:08.974 "name": "BaseBdev2", 00:17:08.974 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:08.974 "is_configured": true, 00:17:08.974 "data_offset": 256, 00:17:08.974 "data_size": 7936 00:17:08.974 } 00:17:08.974 ] 00:17:08.974 }' 00:17:08.974 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.974 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.543 12:37:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.543 "name": "raid_bdev1", 00:17:09.543 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:09.543 "strip_size_kb": 0, 00:17:09.543 "state": "online", 00:17:09.543 "raid_level": "raid1", 00:17:09.543 "superblock": true, 00:17:09.543 "num_base_bdevs": 2, 00:17:09.543 "num_base_bdevs_discovered": 2, 00:17:09.543 "num_base_bdevs_operational": 2, 00:17:09.543 "base_bdevs_list": [ 00:17:09.543 { 00:17:09.543 "name": "spare", 00:17:09.543 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:09.543 "is_configured": true, 00:17:09.543 "data_offset": 256, 00:17:09.543 "data_size": 7936 00:17:09.543 }, 00:17:09.543 { 00:17:09.543 "name": "BaseBdev2", 00:17:09.543 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:09.543 "is_configured": true, 00:17:09.543 "data_offset": 256, 00:17:09.543 "data_size": 7936 00:17:09.543 } 00:17:09.543 ] 00:17:09.543 }' 00:17:09.543 12:37:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.543 [2024-11-19 12:37:14.694857] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.543 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.544 12:37:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.544 "name": "raid_bdev1", 00:17:09.544 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:09.544 "strip_size_kb": 0, 00:17:09.544 "state": "online", 00:17:09.544 
"raid_level": "raid1", 00:17:09.544 "superblock": true, 00:17:09.544 "num_base_bdevs": 2, 00:17:09.544 "num_base_bdevs_discovered": 1, 00:17:09.544 "num_base_bdevs_operational": 1, 00:17:09.544 "base_bdevs_list": [ 00:17:09.544 { 00:17:09.544 "name": null, 00:17:09.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.544 "is_configured": false, 00:17:09.544 "data_offset": 0, 00:17:09.544 "data_size": 7936 00:17:09.544 }, 00:17:09.544 { 00:17:09.544 "name": "BaseBdev2", 00:17:09.544 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:09.544 "is_configured": true, 00:17:09.544 "data_offset": 256, 00:17:09.544 "data_size": 7936 00:17:09.544 } 00:17:09.544 ] 00:17:09.544 }' 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.544 12:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.112 12:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:10.112 12:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.112 12:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.112 [2024-11-19 12:37:15.138121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:10.112 [2024-11-19 12:37:15.138409] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:10.112 [2024-11-19 12:37:15.138469] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:10.112 [2024-11-19 12:37:15.138550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:10.112 [2024-11-19 12:37:15.141389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:10.112 [2024-11-19 12:37:15.143329] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.112 12:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.112 12:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:11.051 "name": "raid_bdev1", 00:17:11.051 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:11.051 "strip_size_kb": 0, 00:17:11.051 "state": "online", 00:17:11.051 "raid_level": "raid1", 00:17:11.051 "superblock": true, 00:17:11.051 "num_base_bdevs": 2, 00:17:11.051 "num_base_bdevs_discovered": 2, 00:17:11.051 "num_base_bdevs_operational": 2, 00:17:11.051 "process": { 00:17:11.051 "type": "rebuild", 00:17:11.051 "target": "spare", 00:17:11.051 "progress": { 00:17:11.051 "blocks": 2560, 00:17:11.051 "percent": 32 00:17:11.051 } 00:17:11.051 }, 00:17:11.051 "base_bdevs_list": [ 00:17:11.051 { 00:17:11.051 "name": "spare", 00:17:11.051 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:11.051 "is_configured": true, 00:17:11.051 "data_offset": 256, 00:17:11.051 "data_size": 7936 00:17:11.051 }, 00:17:11.051 { 00:17:11.051 "name": "BaseBdev2", 00:17:11.051 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:11.051 "is_configured": true, 00:17:11.051 "data_offset": 256, 00:17:11.051 "data_size": 7936 00:17:11.051 } 00:17:11.051 ] 00:17:11.051 }' 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.051 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 [2024-11-19 12:37:16.310598] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:11.310 [2024-11-19 12:37:16.348597] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:11.310 [2024-11-19 12:37:16.348778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.310 [2024-11-19 12:37:16.348820] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:11.310 [2024-11-19 12:37:16.348841] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.310 12:37:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.310 "name": "raid_bdev1", 00:17:11.310 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:11.310 "strip_size_kb": 0, 00:17:11.310 "state": "online", 00:17:11.310 "raid_level": "raid1", 00:17:11.310 "superblock": true, 00:17:11.310 "num_base_bdevs": 2, 00:17:11.310 "num_base_bdevs_discovered": 1, 00:17:11.310 "num_base_bdevs_operational": 1, 00:17:11.310 "base_bdevs_list": [ 00:17:11.310 { 00:17:11.310 "name": null, 00:17:11.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.310 "is_configured": false, 00:17:11.310 "data_offset": 0, 00:17:11.310 "data_size": 7936 00:17:11.310 }, 00:17:11.310 { 00:17:11.310 "name": "BaseBdev2", 00:17:11.310 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:11.310 "is_configured": true, 00:17:11.310 "data_offset": 256, 00:17:11.310 "data_size": 7936 00:17:11.310 } 00:17:11.310 ] 00:17:11.310 }' 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.310 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.570 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.570 12:37:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.570 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:11.570 [2024-11-19 12:37:16.791947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.570 [2024-11-19 12:37:16.792080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.570 [2024-11-19 12:37:16.792126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:11.570 [2024-11-19 12:37:16.792155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.570 [2024-11-19 12:37:16.792400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.570 [2024-11-19 12:37:16.792450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.570 [2024-11-19 12:37:16.792538] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:11.570 [2024-11-19 12:37:16.792575] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:11.570 [2024-11-19 12:37:16.792638] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:11.570 [2024-11-19 12:37:16.792701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.570 [2024-11-19 12:37:16.795475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:11.570 [2024-11-19 12:37:16.797422] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:11.570 spare 00:17:11.570 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.570 12:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:12.957 "name": "raid_bdev1", 00:17:12.957 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:12.957 "strip_size_kb": 0, 00:17:12.957 "state": "online", 00:17:12.957 "raid_level": "raid1", 00:17:12.957 "superblock": true, 00:17:12.957 "num_base_bdevs": 2, 00:17:12.957 "num_base_bdevs_discovered": 2, 00:17:12.957 "num_base_bdevs_operational": 2, 00:17:12.957 "process": { 00:17:12.957 "type": "rebuild", 00:17:12.957 "target": "spare", 00:17:12.957 "progress": { 00:17:12.957 "blocks": 2560, 00:17:12.957 "percent": 32 00:17:12.957 } 00:17:12.957 }, 00:17:12.957 "base_bdevs_list": [ 00:17:12.957 { 00:17:12.957 "name": "spare", 00:17:12.957 "uuid": "2c85587a-03f9-5c52-bf2b-f58a67febfc0", 00:17:12.957 "is_configured": true, 00:17:12.957 "data_offset": 256, 00:17:12.957 "data_size": 7936 00:17:12.957 }, 00:17:12.957 { 00:17:12.957 "name": "BaseBdev2", 00:17:12.957 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:12.957 "is_configured": true, 00:17:12.957 "data_offset": 256, 00:17:12.957 "data_size": 7936 00:17:12.957 } 00:17:12.957 ] 00:17:12.957 }' 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.957 12:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.957 [2024-11-19 
12:37:17.948570] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.957 [2024-11-19 12:37:18.002557] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:12.957 [2024-11-19 12:37:18.002785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.957 [2024-11-19 12:37:18.002829] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.957 [2024-11-19 12:37:18.002856] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:12.957 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.957 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:12.957 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.958 12:37:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.958 "name": "raid_bdev1", 00:17:12.958 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:12.958 "strip_size_kb": 0, 00:17:12.958 "state": "online", 00:17:12.958 "raid_level": "raid1", 00:17:12.958 "superblock": true, 00:17:12.958 "num_base_bdevs": 2, 00:17:12.958 "num_base_bdevs_discovered": 1, 00:17:12.958 "num_base_bdevs_operational": 1, 00:17:12.958 "base_bdevs_list": [ 00:17:12.958 { 00:17:12.958 "name": null, 00:17:12.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.958 "is_configured": false, 00:17:12.958 "data_offset": 0, 00:17:12.958 "data_size": 7936 00:17:12.958 }, 00:17:12.958 { 00:17:12.958 "name": "BaseBdev2", 00:17:12.958 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:12.958 "is_configured": true, 00:17:12.958 "data_offset": 256, 00:17:12.958 "data_size": 7936 00:17:12.958 } 00:17:12.958 ] 00:17:12.958 }' 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.958 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.234 12:37:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.234 "name": "raid_bdev1", 00:17:13.234 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:13.234 "strip_size_kb": 0, 00:17:13.234 "state": "online", 00:17:13.234 "raid_level": "raid1", 00:17:13.234 "superblock": true, 00:17:13.234 "num_base_bdevs": 2, 00:17:13.234 "num_base_bdevs_discovered": 1, 00:17:13.234 "num_base_bdevs_operational": 1, 00:17:13.234 "base_bdevs_list": [ 00:17:13.234 { 00:17:13.234 "name": null, 00:17:13.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.234 "is_configured": false, 00:17:13.234 "data_offset": 0, 00:17:13.234 "data_size": 7936 00:17:13.234 }, 00:17:13.234 { 00:17:13.234 "name": "BaseBdev2", 00:17:13.234 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:13.234 "is_configured": true, 00:17:13.234 "data_offset": 256, 
00:17:13.234 "data_size": 7936 00:17:13.234 } 00:17:13.234 ] 00:17:13.234 }' 00:17:13.234 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.494 [2024-11-19 12:37:18.569708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:13.494 [2024-11-19 12:37:18.569799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.494 [2024-11-19 12:37:18.569825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:13.494 [2024-11-19 12:37:18.569837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.494 [2024-11-19 12:37:18.570012] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.494 [2024-11-19 12:37:18.570034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:13.494 [2024-11-19 12:37:18.570092] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:13.494 [2024-11-19 12:37:18.570126] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:13.494 [2024-11-19 12:37:18.570134] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:13.494 [2024-11-19 12:37:18.570150] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:13.494 BaseBdev1 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.494 12:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.431 12:37:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.431 "name": "raid_bdev1", 00:17:14.431 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:14.431 "strip_size_kb": 0, 00:17:14.431 "state": "online", 00:17:14.431 "raid_level": "raid1", 00:17:14.431 "superblock": true, 00:17:14.431 "num_base_bdevs": 2, 00:17:14.431 "num_base_bdevs_discovered": 1, 00:17:14.431 "num_base_bdevs_operational": 1, 00:17:14.431 "base_bdevs_list": [ 00:17:14.431 { 00:17:14.431 "name": null, 00:17:14.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.431 "is_configured": false, 00:17:14.431 "data_offset": 0, 00:17:14.431 "data_size": 7936 00:17:14.431 }, 00:17:14.431 { 00:17:14.431 "name": "BaseBdev2", 00:17:14.431 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:14.431 "is_configured": true, 00:17:14.431 "data_offset": 256, 00:17:14.431 "data_size": 7936 00:17:14.431 } 00:17:14.431 ] 00:17:14.431 }' 00:17:14.431 12:37:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.431 12:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.999 "name": "raid_bdev1", 00:17:14.999 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:14.999 "strip_size_kb": 0, 00:17:14.999 "state": "online", 00:17:14.999 "raid_level": "raid1", 00:17:14.999 "superblock": true, 00:17:14.999 "num_base_bdevs": 2, 00:17:14.999 "num_base_bdevs_discovered": 1, 00:17:14.999 "num_base_bdevs_operational": 1, 00:17:14.999 "base_bdevs_list": [ 00:17:14.999 { 00:17:14.999 "name": 
null, 00:17:14.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.999 "is_configured": false, 00:17:14.999 "data_offset": 0, 00:17:14.999 "data_size": 7936 00:17:14.999 }, 00:17:14.999 { 00:17:14.999 "name": "BaseBdev2", 00:17:14.999 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:14.999 "is_configured": true, 00:17:14.999 "data_offset": 256, 00:17:14.999 "data_size": 7936 00:17:14.999 } 00:17:14.999 ] 00:17:14.999 }' 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.999 [2024-11-19 12:37:20.187041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.999 [2024-11-19 12:37:20.187228] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.999 [2024-11-19 12:37:20.187248] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:14.999 request: 00:17:14.999 { 00:17:14.999 "base_bdev": "BaseBdev1", 00:17:14.999 "raid_bdev": "raid_bdev1", 00:17:14.999 "method": "bdev_raid_add_base_bdev", 00:17:14.999 "req_id": 1 00:17:14.999 } 00:17:14.999 Got JSON-RPC error response 00:17:14.999 response: 00:17:14.999 { 00:17:14.999 "code": -22, 00:17:14.999 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:14.999 } 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.999 12:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:15.943 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:15.943 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.202 "name": "raid_bdev1", 00:17:16.202 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:16.202 "strip_size_kb": 0, 
00:17:16.202 "state": "online", 00:17:16.202 "raid_level": "raid1", 00:17:16.202 "superblock": true, 00:17:16.202 "num_base_bdevs": 2, 00:17:16.202 "num_base_bdevs_discovered": 1, 00:17:16.202 "num_base_bdevs_operational": 1, 00:17:16.202 "base_bdevs_list": [ 00:17:16.202 { 00:17:16.202 "name": null, 00:17:16.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.202 "is_configured": false, 00:17:16.202 "data_offset": 0, 00:17:16.202 "data_size": 7936 00:17:16.202 }, 00:17:16.202 { 00:17:16.202 "name": "BaseBdev2", 00:17:16.202 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:16.202 "is_configured": true, 00:17:16.202 "data_offset": 256, 00:17:16.202 "data_size": 7936 00:17:16.202 } 00:17:16.202 ] 00:17:16.202 }' 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.202 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.462 
12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.462 "name": "raid_bdev1", 00:17:16.462 "uuid": "ddf8e9fc-6b33-47b0-a364-71daa34124f4", 00:17:16.462 "strip_size_kb": 0, 00:17:16.462 "state": "online", 00:17:16.462 "raid_level": "raid1", 00:17:16.462 "superblock": true, 00:17:16.462 "num_base_bdevs": 2, 00:17:16.462 "num_base_bdevs_discovered": 1, 00:17:16.462 "num_base_bdevs_operational": 1, 00:17:16.462 "base_bdevs_list": [ 00:17:16.462 { 00:17:16.462 "name": null, 00:17:16.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.462 "is_configured": false, 00:17:16.462 "data_offset": 0, 00:17:16.462 "data_size": 7936 00:17:16.462 }, 00:17:16.462 { 00:17:16.462 "name": "BaseBdev2", 00:17:16.462 "uuid": "fbd61f40-a6e5-5a4f-8fc4-be48555a2ff4", 00:17:16.462 "is_configured": true, 00:17:16.462 "data_offset": 256, 00:17:16.462 "data_size": 7936 00:17:16.462 } 00:17:16.462 ] 00:17:16.462 }' 00:17:16.462 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99543 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99543 ']' 00:17:16.722 12:37:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99543 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99543 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99543' 00:17:16.722 killing process with pid 99543 00:17:16.722 Received shutdown signal, test time was about 60.000000 seconds 00:17:16.722 00:17:16.722 Latency(us) 00:17:16.722 [2024-11-19T12:37:21.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.722 [2024-11-19T12:37:21.983Z] =================================================================================================================== 00:17:16.722 [2024-11-19T12:37:21.983Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99543 00:17:16.722 [2024-11-19 12:37:21.838269] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.722 [2024-11-19 12:37:21.838411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.722 12:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99543 00:17:16.722 [2024-11-19 12:37:21.838464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:16.722 [2024-11-19 12:37:21.838474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:17:16.722 [2024-11-19 12:37:21.872376] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.981 12:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:16.981 00:17:16.981 real 0m16.107s 00:17:16.981 user 0m21.422s 00:17:16.981 sys 0m1.781s 00:17:16.981 12:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.981 12:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.981 ************************************ 00:17:16.981 END TEST raid_rebuild_test_sb_md_interleaved 00:17:16.981 ************************************ 00:17:16.981 12:37:22 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:16.981 12:37:22 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:16.981 12:37:22 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99543 ']' 00:17:16.981 12:37:22 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99543 00:17:16.981 12:37:22 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:16.981 00:17:16.981 real 10m4.756s 00:17:16.981 user 14m13.166s 00:17:16.981 sys 1m55.578s 00:17:16.981 12:37:22 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.981 12:37:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.981 ************************************ 00:17:16.981 END TEST bdev_raid 00:17:16.981 ************************************ 00:17:17.240 12:37:22 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:17.240 12:37:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:17.240 12:37:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.240 12:37:22 -- common/autotest_common.sh@10 -- # set +x 00:17:17.240 
************************************ 00:17:17.240 START TEST spdkcli_raid 00:17:17.240 ************************************ 00:17:17.240 12:37:22 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:17.240 * Looking for test storage... 00:17:17.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:17.240 12:37:22 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:17.240 12:37:22 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:17.240 12:37:22 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:17.240 12:37:22 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.240 12:37:22 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.241 12:37:22 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.241 12:37:22 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:17.241 12:37:22 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:17.241 12:37:22 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.241 12:37:22 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.241 12:37:22 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:17.241 12:37:22 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:17.241 12:37:22 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.500 12:37:22 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:17.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.500 --rc genhtml_branch_coverage=1 00:17:17.500 --rc genhtml_function_coverage=1 00:17:17.500 --rc genhtml_legend=1 00:17:17.500 --rc geninfo_all_blocks=1 00:17:17.500 --rc geninfo_unexecuted_blocks=1 00:17:17.500 00:17:17.500 ' 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:17.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.500 --rc genhtml_branch_coverage=1 00:17:17.500 --rc genhtml_function_coverage=1 00:17:17.500 --rc genhtml_legend=1 00:17:17.500 --rc geninfo_all_blocks=1 00:17:17.500 --rc geninfo_unexecuted_blocks=1 00:17:17.500 00:17:17.500 ' 00:17:17.500 
12:37:22 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:17.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.500 --rc genhtml_branch_coverage=1 00:17:17.500 --rc genhtml_function_coverage=1 00:17:17.500 --rc genhtml_legend=1 00:17:17.500 --rc geninfo_all_blocks=1 00:17:17.500 --rc geninfo_unexecuted_blocks=1 00:17:17.500 00:17:17.500 ' 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:17.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.500 --rc genhtml_branch_coverage=1 00:17:17.500 --rc genhtml_function_coverage=1 00:17:17.500 --rc genhtml_legend=1 00:17:17.500 --rc geninfo_all_blocks=1 00:17:17.500 --rc geninfo_unexecuted_blocks=1 00:17:17.500 00:17:17.500 ' 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:17.500 12:37:22 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100207 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:17.500 12:37:22 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100207 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100207 ']' 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.500 12:37:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.500 [2024-11-19 12:37:22.650439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:17.500 [2024-11-19 12:37:22.650600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100207 ] 00:17:17.759 [2024-11-19 12:37:22.818033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:17.759 [2024-11-19 12:37:22.872379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.760 [2024-11-19 12:37:22.872491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.329 12:37:23 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.329 12:37:23 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:17:18.329 12:37:23 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:18.329 12:37:23 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:18.329 12:37:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.329 12:37:23 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:18.329 12:37:23 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:18.329 12:37:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.329 12:37:23 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:18.329 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:18.329 ' 00:17:20.236 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:20.236 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:20.236 12:37:25 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:20.236 12:37:25 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:20.236 12:37:25 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.236 12:37:25 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:20.236 12:37:25 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:20.236 12:37:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.236 12:37:25 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:20.236 ' 00:17:21.172 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:21.172 12:37:26 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:21.172 12:37:26 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:21.172 12:37:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.430 12:37:26 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:21.430 12:37:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:21.430 12:37:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.430 12:37:26 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:21.430 12:37:26 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:21.998 12:37:26 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:21.998 12:37:27 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:21.998 12:37:27 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:21.998 12:37:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:21.998 12:37:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.998 12:37:27 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:21.998 12:37:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:21.998 12:37:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.998 12:37:27 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:21.998 ' 00:17:22.933 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:23.192 12:37:28 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:23.192 12:37:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.192 12:37:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.192 12:37:28 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:23.192 12:37:28 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:23.192 12:37:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.192 12:37:28 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:23.192 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:23.192 ' 00:17:24.566 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:24.566 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:24.566 12:37:29 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.566 12:37:29 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100207 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100207 ']' 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100207 00:17:24.566 12:37:29 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100207 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.566 killing process with pid 100207 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100207' 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100207 00:17:24.566 12:37:29 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100207 00:17:25.134 12:37:30 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:25.134 12:37:30 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100207 ']' 00:17:25.134 12:37:30 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100207 00:17:25.134 12:37:30 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100207 ']' 00:17:25.134 12:37:30 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100207 00:17:25.134 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100207) - No such process 00:17:25.134 Process with pid 100207 is not found 00:17:25.134 12:37:30 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100207 is not found' 00:17:25.134 12:37:30 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:25.134 12:37:30 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:25.134 12:37:30 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:25.134 12:37:30 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:25.134 00:17:25.135 real 0m7.955s 00:17:25.135 user 0m16.900s 
00:17:25.135 sys 0m1.140s 00:17:25.135 12:37:30 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.135 12:37:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.135 ************************************ 00:17:25.135 END TEST spdkcli_raid 00:17:25.135 ************************************ 00:17:25.135 12:37:30 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:25.135 12:37:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:25.135 12:37:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.135 12:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:25.135 ************************************ 00:17:25.135 START TEST blockdev_raid5f 00:17:25.135 ************************************ 00:17:25.135 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:25.394 * Looking for test storage... 00:17:25.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:25.394 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:25.394 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:17:25.394 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:25.394 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.394 12:37:30 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.394 12:37:30 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:25.394 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.394 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:17:25.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.394 --rc genhtml_branch_coverage=1 00:17:25.394 --rc genhtml_function_coverage=1 00:17:25.394 --rc genhtml_legend=1 00:17:25.395 --rc geninfo_all_blocks=1 00:17:25.395 --rc geninfo_unexecuted_blocks=1 00:17:25.395 00:17:25.395 ' 00:17:25.395 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.395 --rc genhtml_branch_coverage=1 00:17:25.395 --rc genhtml_function_coverage=1 00:17:25.395 --rc genhtml_legend=1 00:17:25.395 --rc geninfo_all_blocks=1 00:17:25.395 --rc geninfo_unexecuted_blocks=1 00:17:25.395 00:17:25.395 ' 00:17:25.395 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.395 --rc genhtml_branch_coverage=1 00:17:25.395 --rc genhtml_function_coverage=1 00:17:25.395 --rc genhtml_legend=1 00:17:25.395 --rc geninfo_all_blocks=1 00:17:25.395 --rc geninfo_unexecuted_blocks=1 00:17:25.395 00:17:25.395 ' 00:17:25.395 12:37:30 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.395 --rc genhtml_branch_coverage=1 00:17:25.395 --rc genhtml_function_coverage=1 00:17:25.395 --rc genhtml_legend=1 00:17:25.395 --rc geninfo_all_blocks=1 00:17:25.395 --rc geninfo_unexecuted_blocks=1 00:17:25.395 00:17:25.395 ' 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100464 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:25.395 12:37:30 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100464 00:17:25.395 12:37:30 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100464 ']' 00:17:25.395 12:37:30 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.395 12:37:30 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.395 12:37:30 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.395 12:37:30 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.395 12:37:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:25.656 [2024-11-19 12:37:30.655448] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:25.656 [2024-11-19 12:37:30.655606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100464 ] 00:17:25.656 [2024-11-19 12:37:30.824597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.656 [2024-11-19 12:37:30.878492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:26.597 12:37:31 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.597 Malloc0 00:17:26.597 Malloc1 00:17:26.597 Malloc2 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "845d4bb3-74ba-4044-8351-e01ec07d7794"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "845d4bb3-74ba-4044-8351-e01ec07d7794",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "845d4bb3-74ba-4044-8351-e01ec07d7794",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "7402d8d7-48b0-4b28-92e5-fa066a68880b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"609f932a-7f46-47f8-b379-b52318748554",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b9128d64-fd9c-42f7-8d4a-746846d39994",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:26.597 12:37:31 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100464 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100464 ']' 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100464 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100464 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:26.597 killing process with pid 100464 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100464' 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100464 00:17:26.597 12:37:31 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100464 00:17:27.166 12:37:32 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:27.166 12:37:32 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:27.166 
12:37:32 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:27.166 12:37:32 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.166 12:37:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.166 ************************************ 00:17:27.166 START TEST bdev_hello_world 00:17:27.166 ************************************ 00:17:27.166 12:37:32 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:27.166 [2024-11-19 12:37:32.306388] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:27.166 [2024-11-19 12:37:32.306517] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100509 ] 00:17:27.425 [2024-11-19 12:37:32.469310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.425 [2024-11-19 12:37:32.522620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.684 [2024-11-19 12:37:32.709189] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:27.684 [2024-11-19 12:37:32.709241] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:27.684 [2024-11-19 12:37:32.709258] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:27.684 [2024-11-19 12:37:32.709632] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:27.684 [2024-11-19 12:37:32.709797] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:27.684 [2024-11-19 12:37:32.709817] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:27.684 [2024-11-19 12:37:32.709887] hello_bdev.c: 65:read_complete: *NOTICE*: Read 
string from bdev : Hello World! 00:17:27.684 00:17:27.684 [2024-11-19 12:37:32.709911] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:27.943 00:17:27.943 real 0m0.750s 00:17:27.943 user 0m0.416s 00:17:27.943 sys 0m0.218s 00:17:27.943 12:37:32 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.943 12:37:32 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:27.943 ************************************ 00:17:27.943 END TEST bdev_hello_world 00:17:27.943 ************************************ 00:17:27.943 12:37:33 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:27.943 12:37:33 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:27.943 12:37:33 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.943 12:37:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.943 ************************************ 00:17:27.943 START TEST bdev_bounds 00:17:27.943 ************************************ 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100535 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:27.943 Process bdevio pid: 100535 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100535' 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100535 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100535 ']' 
00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:27.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:27.943 12:37:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:27.943 [2024-11-19 12:37:33.127886] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:27.943 [2024-11-19 12:37:33.128029] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100535 ] 00:17:28.202 [2024-11-19 12:37:33.272797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:28.202 [2024-11-19 12:37:33.328273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.202 [2024-11-19 12:37:33.328373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.202 [2024-11-19 12:37:33.328538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.769 12:37:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:28.769 12:37:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:28.769 12:37:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:29.029 I/O targets: 00:17:29.029 raid5f: 131072 blocks of 512 bytes (64 MiB) 
00:17:29.029 00:17:29.029 00:17:29.029 CUnit - A unit testing framework for C - Version 2.1-3 00:17:29.029 http://cunit.sourceforge.net/ 00:17:29.029 00:17:29.029 00:17:29.029 Suite: bdevio tests on: raid5f 00:17:29.029 Test: blockdev write read block ...passed 00:17:29.029 Test: blockdev write zeroes read block ...passed 00:17:29.029 Test: blockdev write zeroes read no split ...passed 00:17:29.029 Test: blockdev write zeroes read split ...passed 00:17:29.029 Test: blockdev write zeroes read split partial ...passed 00:17:29.029 Test: blockdev reset ...passed 00:17:29.029 Test: blockdev write read 8 blocks ...passed 00:17:29.029 Test: blockdev write read size > 128k ...passed 00:17:29.029 Test: blockdev write read invalid size ...passed 00:17:29.029 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.029 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.029 Test: blockdev write read max offset ...passed 00:17:29.029 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.029 Test: blockdev writev readv 8 blocks ...passed 00:17:29.029 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.029 Test: blockdev writev readv block ...passed 00:17:29.029 Test: blockdev writev readv size > 128k ...passed 00:17:29.029 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.029 Test: blockdev comparev and writev ...passed 00:17:29.029 Test: blockdev nvme passthru rw ...passed 00:17:29.029 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.029 Test: blockdev nvme admin passthru ...passed 00:17:29.029 Test: blockdev copy ...passed 00:17:29.029 00:17:29.029 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.029 suites 1 1 n/a 0 0 00:17:29.029 tests 23 23 23 0 0 00:17:29.029 asserts 130 130 130 0 n/a 00:17:29.029 00:17:29.029 Elapsed time = 0.353 seconds 00:17:29.029 0 00:17:29.029 12:37:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 100535 00:17:29.029 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100535 ']' 00:17:29.029 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100535 00:17:29.029 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:29.029 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.029 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100535 00:17:29.288 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.288 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.288 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100535' 00:17:29.288 killing process with pid 100535 00:17:29.288 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100535 00:17:29.288 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100535 00:17:29.547 12:37:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:29.547 00:17:29.547 real 0m1.542s 00:17:29.547 user 0m3.733s 00:17:29.547 sys 0m0.356s 00:17:29.547 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.547 12:37:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:29.547 ************************************ 00:17:29.547 END TEST bdev_bounds 00:17:29.547 ************************************ 00:17:29.547 12:37:34 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:29.547 12:37:34 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:29.547 12:37:34 blockdev_raid5f -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.547 12:37:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:29.547 ************************************ 00:17:29.547 START TEST bdev_nbd 00:17:29.547 ************************************ 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 
00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:29.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100583 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100583 /var/tmp/spdk-nbd.sock 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100583 ']' 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.547 12:37:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:29.547 [2024-11-19 12:37:34.752489] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:29.547 [2024-11-19 12:37:34.752724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.808 [2024-11-19 12:37:34.915473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.808 [2024-11-19 12:37:34.969646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:30.377 12:37:35 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.637 1+0 records in 00:17:30.637 1+0 records out 00:17:30.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426206 s, 9.6 MB/s 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:30.637 12:37:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:30.896 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:30.896 { 00:17:30.896 "nbd_device": "/dev/nbd0", 00:17:30.896 "bdev_name": "raid5f" 00:17:30.896 } 00:17:30.896 ]' 00:17:30.896 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:30.896 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:30.896 { 00:17:30.896 "nbd_device": "/dev/nbd0", 00:17:30.896 "bdev_name": "raid5f" 00:17:30.896 } 00:17:30.896 ]' 00:17:30.896 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:30.896 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:30.896 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.896 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:30.897 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.897 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:30.897 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.897 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.156 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.415 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:31.674 /dev/nbd0 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:31.674 12:37:36 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.674 1+0 records in 00:17:31.674 1+0 records out 00:17:31.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292822 s, 14.0 MB/s 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.674 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.675 12:37:36 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:31.675 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.675 12:37:36 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:31.934 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:31.934 { 00:17:31.934 "nbd_device": "/dev/nbd0", 00:17:31.934 "bdev_name": "raid5f" 00:17:31.934 } 00:17:31.934 ]' 00:17:31.934 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:31.934 { 00:17:31.934 "nbd_device": "/dev/nbd0", 00:17:31.934 "bdev_name": "raid5f" 00:17:31.934 } 00:17:31.934 ]' 00:17:31.934 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:32.193 256+0 records in 00:17:32.193 256+0 records out 00:17:32.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145254 s, 72.2 MB/s 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:32.193 256+0 records in 00:17:32.193 256+0 records out 00:17:32.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033732 s, 31.1 MB/s 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:32.193 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:32.194 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.194 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:32.194 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.194 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:32.194 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.194 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.453 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:32.713 12:37:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:32.972 malloc_lvol_verify 00:17:32.972 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:33.231 8efd2924-e6d0-49b1-990e-d7a23f05e8c5 00:17:33.231 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:33.231 c9ea3675-a619-4fe7-ba5d-cc3a1539f538 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:33.490 /dev/nbd0 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:33.490 mke2fs 1.47.0 (5-Feb-2023) 00:17:33.490 Discarding device blocks: 0/4096 done 00:17:33.490 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:33.490 00:17:33.490 Allocating group tables: 0/1 done 00:17:33.490 Writing inode tables: 0/1 done 00:17:33.490 Creating journal (1024 blocks): done 00:17:33.490 Writing superblocks and filesystem accounting information: 0/1 done 00:17:33.490 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.490 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:33.491 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.491 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100583 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100583 ']' 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100583 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.750 12:37:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100583 00:17:34.020 12:37:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.020 killing process with pid 100583 00:17:34.020 12:37:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.020 12:37:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100583' 00:17:34.020 12:37:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100583 00:17:34.020 12:37:39 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100583 00:17:34.280 12:37:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:34.280 00:17:34.280 real 0m4.654s 00:17:34.280 user 0m6.806s 00:17:34.280 sys 0m1.340s 00:17:34.280 12:37:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.280 12:37:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:34.280 ************************************ 00:17:34.280 END TEST bdev_nbd 00:17:34.280 ************************************ 00:17:34.280 12:37:39 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:34.280 12:37:39 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:34.280 12:37:39 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:34.280 12:37:39 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:34.280 12:37:39 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:34.280 12:37:39 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.280 12:37:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:34.280 ************************************ 00:17:34.280 START TEST bdev_fio 00:17:34.280 ************************************ 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:34.280 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.280 12:37:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:34.280 ************************************ 00:17:34.280 START TEST bdev_fio_rw_verify 00:17:34.280 ************************************ 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:34.281 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:34.540 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:34.540 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:34.540 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:17:34.540 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:34.540 12:37:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.540 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.540 fio-3.35 00:17:34.540 Starting 1 thread 00:17:46.765 00:17:46.765 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100773: Tue Nov 19 12:37:50 2024 00:17:46.765 read: IOPS=11.5k, BW=44.8MiB/s (46.9MB/s)(448MiB/10000msec) 00:17:46.765 slat (usec): min=18, max=128, avg=20.66, stdev= 2.24 00:17:46.765 clat (usec): min=10, max=389, avg=138.74, stdev=49.76 00:17:46.765 lat (usec): min=30, max=413, avg=159.40, stdev=50.11 00:17:46.765 clat percentiles (usec): 00:17:46.765 | 50.000th=[ 143], 99.000th=[ 241], 99.900th=[ 269], 99.990th=[ 322], 00:17:46.765 | 99.999th=[ 367] 00:17:46.765 write: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(462MiB/9869msec); 0 zone resets 00:17:46.765 slat (usec): min=7, max=329, avg=18.07, stdev= 4.36 00:17:46.765 clat (usec): min=60, max=1737, avg=319.61, stdev=49.98 00:17:46.765 lat (usec): min=77, max=2067, avg=337.67, stdev=51.50 00:17:46.765 clat percentiles (usec): 00:17:46.765 | 50.000th=[ 322], 99.000th=[ 429], 99.900th=[ 644], 99.990th=[ 1483], 00:17:46.765 | 99.999th=[ 1713] 00:17:46.765 bw ( KiB/s): min=43256, max=50696, per=98.87%, avg=47396.53, stdev=2014.68, samples=19 00:17:46.765 iops : min=10814, max=12674, avg=11849.11, stdev=503.70, samples=19 00:17:46.765 lat (usec) : 20=0.01%, 50=0.01%, 100=12.08%, 
250=40.12%, 500=47.70% 00:17:46.765 lat (usec) : 750=0.05%, 1000=0.02% 00:17:46.765 lat (msec) : 2=0.02% 00:17:46.765 cpu : usr=98.89%, sys=0.45%, ctx=30, majf=0, minf=12542 00:17:46.765 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:46.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.765 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.765 issued rwts: total=114613,118271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.765 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:46.765 00:17:46.765 Run status group 0 (all jobs): 00:17:46.765 READ: bw=44.8MiB/s (46.9MB/s), 44.8MiB/s-44.8MiB/s (46.9MB/s-46.9MB/s), io=448MiB (469MB), run=10000-10000msec 00:17:46.765 WRITE: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=462MiB (484MB), run=9869-9869msec 00:17:46.765 ----------------------------------------------------- 00:17:46.765 Suppressions used: 00:17:46.765 count bytes template 00:17:46.765 1 7 /usr/src/fio/parse.c 00:17:46.765 104 9984 /usr/src/fio/iolog.c 00:17:46.765 1 8 libtcmalloc_minimal.so 00:17:46.765 1 904 libcrypto.so 00:17:46.765 ----------------------------------------------------- 00:17:46.765 00:17:46.765 00:17:46.765 real 0m11.206s 00:17:46.765 user 0m11.414s 00:17:46.765 sys 0m0.661s 00:17:46.765 12:37:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.765 12:37:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:46.765 ************************************ 00:17:46.765 END TEST bdev_fio_rw_verify 00:17:46.765 ************************************ 00:17:46.765 12:37:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:46.765 12:37:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.765 12:37:50 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "845d4bb3-74ba-4044-8351-e01ec07d7794"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "845d4bb3-74ba-4044-8351-e01ec07d7794",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "845d4bb3-74ba-4044-8351-e01ec07d7794",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "7402d8d7-48b0-4b28-92e5-fa066a68880b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "609f932a-7f46-47f8-b379-b52318748554",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b9128d64-fd9c-42f7-8d4a-746846d39994",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:46.766 /home/vagrant/spdk_repo/spdk 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:46.766 00:17:46.766 real 0m11.466s 00:17:46.766 user 0m11.526s 00:17:46.766 sys 0m0.791s 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.766 12:37:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:46.766 ************************************ 00:17:46.766 END TEST bdev_fio 00:17:46.766 ************************************ 00:17:46.766 12:37:50 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:46.766 12:37:50 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:46.766 12:37:50 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:46.766 12:37:50 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.766 12:37:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:46.766 ************************************ 00:17:46.766 START TEST bdev_verify 00:17:46.766 ************************************ 00:17:46.766 12:37:50 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:46.766 [2024-11-19 12:37:50.983562] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:46.766 [2024-11-19 12:37:50.983704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100921 ] 00:17:46.766 [2024-11-19 12:37:51.146183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:46.766 [2024-11-19 12:37:51.199352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.766 [2024-11-19 12:37:51.199457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.766 Running I/O for 5 seconds... 00:17:48.276 15469.00 IOPS, 60.43 MiB/s [2024-11-19T12:37:54.475Z] 16044.00 IOPS, 62.67 MiB/s [2024-11-19T12:37:55.413Z] 16346.00 IOPS, 63.85 MiB/s [2024-11-19T12:37:56.791Z] 14999.75 IOPS, 58.59 MiB/s [2024-11-19T12:37:56.791Z] 14125.60 IOPS, 55.18 MiB/s 00:17:51.531 Latency(us) 00:17:51.531 [2024-11-19T12:37:56.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.531 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:51.531 Verification LBA range: start 0x0 length 0x2000 00:17:51.531 raid5f : 5.03 6678.70 26.09 0.00 0.00 28724.47 142.20 31594.65 00:17:51.531 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:51.531 Verification LBA range: start 0x2000 length 0x2000 00:17:51.531 raid5f : 5.01 7417.56 28.97 0.00 0.00 25983.51 348.79 36173.58 00:17:51.531 [2024-11-19T12:37:56.792Z] =================================================================================================================== 00:17:51.531 [2024-11-19T12:37:56.792Z] Total : 14096.27 55.06 0.00 0.00 27283.79 142.20 36173.58 00:17:51.531 00:17:51.531 real 0m5.769s 00:17:51.531 user 0m10.715s 00:17:51.531 sys 0m0.230s 00:17:51.531 12:37:56 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.531 12:37:56 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:51.531 ************************************ 00:17:51.531 END TEST bdev_verify 00:17:51.531 ************************************ 00:17:51.531 12:37:56 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:51.531 12:37:56 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:51.531 12:37:56 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.531 12:37:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:51.531 ************************************ 00:17:51.531 START TEST bdev_verify_big_io 00:17:51.531 ************************************ 00:17:51.531 12:37:56 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:51.789 [2024-11-19 12:37:56.833737] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:51.790 [2024-11-19 12:37:56.833921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101003 ] 00:17:51.790 [2024-11-19 12:37:56.999044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:52.049 [2024-11-19 12:37:57.053128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.049 [2024-11-19 12:37:57.053261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.049 Running I/O for 5 seconds... 
00:17:54.364 633.00 IOPS, 39.56 MiB/s [2024-11-19T12:38:00.562Z] 696.50 IOPS, 43.53 MiB/s [2024-11-19T12:38:01.500Z] 697.33 IOPS, 43.58 MiB/s [2024-11-19T12:38:02.438Z] 729.75 IOPS, 45.61 MiB/s [2024-11-19T12:38:02.697Z] 735.80 IOPS, 45.99 MiB/s 00:17:57.436 Latency(us) 00:17:57.436 [2024-11-19T12:38:02.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.436 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:57.436 Verification LBA range: start 0x0 length 0x200 00:17:57.436 raid5f : 5.26 338.22 21.14 0.00 0.00 9338569.75 183.34 402946.24 00:17:57.436 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:57.437 Verification LBA range: start 0x200 length 0x200 00:17:57.437 raid5f : 5.16 418.69 26.17 0.00 0.00 7634637.51 227.16 327851.71 00:17:57.437 [2024-11-19T12:38:02.698Z] =================================================================================================================== 00:17:57.437 [2024-11-19T12:38:02.698Z] Total : 756.91 47.31 0.00 0.00 8404155.30 183.34 402946.24 00:17:57.697 00:17:57.697 real 0m6.026s 00:17:57.697 user 0m11.167s 00:17:57.697 sys 0m0.255s 00:17:57.697 12:38:02 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.697 12:38:02 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.697 ************************************ 00:17:57.698 END TEST bdev_verify_big_io 00:17:57.698 ************************************ 00:17:57.698 12:38:02 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:57.698 12:38:02 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:57.698 12:38:02 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.698 12:38:02 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:57.698 ************************************ 00:17:57.698 START TEST bdev_write_zeroes 00:17:57.698 ************************************ 00:17:57.698 12:38:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:57.698 [2024-11-19 12:38:02.926969] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:57.698 [2024-11-19 12:38:02.927144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101085 ] 00:17:57.958 [2024-11-19 12:38:03.093686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.958 [2024-11-19 12:38:03.149292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.227 Running I/O for 1 seconds... 
00:17:59.177 28959.00 IOPS, 113.12 MiB/s 00:17:59.177 Latency(us) 00:17:59.177 [2024-11-19T12:38:04.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.177 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:59.177 raid5f : 1.01 28916.08 112.95 0.00 0.00 4413.20 1309.29 6524.98 00:17:59.177 [2024-11-19T12:38:04.438Z] =================================================================================================================== 00:17:59.177 [2024-11-19T12:38:04.438Z] Total : 28916.08 112.95 0.00 0.00 4413.20 1309.29 6524.98 00:17:59.438 00:17:59.438 real 0m1.765s 00:17:59.438 user 0m1.413s 00:17:59.438 sys 0m0.231s 00:17:59.438 12:38:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:59.438 12:38:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:59.438 ************************************ 00:17:59.438 END TEST bdev_write_zeroes 00:17:59.438 ************************************ 00:17:59.438 12:38:04 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.438 12:38:04 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:59.438 12:38:04 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:59.438 12:38:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.438 ************************************ 00:17:59.438 START TEST bdev_json_nonenclosed 00:17:59.438 ************************************ 00:17:59.438 12:38:04 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.699 [2024-11-19 
12:38:04.766191] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:59.699 [2024-11-19 12:38:04.766311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101121 ] 00:17:59.699 [2024-11-19 12:38:04.926508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.959 [2024-11-19 12:38:04.980729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.959 [2024-11-19 12:38:04.980845] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:59.959 [2024-11-19 12:38:04.980877] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:59.959 [2024-11-19 12:38:04.980895] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:59.959 00:17:59.959 real 0m0.420s 00:17:59.959 user 0m0.187s 00:17:59.959 sys 0m0.130s 00:17:59.959 12:38:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:59.959 12:38:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:59.959 ************************************ 00:17:59.959 END TEST bdev_json_nonenclosed 00:17:59.959 ************************************ 00:17:59.959 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.959 12:38:05 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:59.959 12:38:05 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:59.959 12:38:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.959 
************************************ 00:17:59.959 START TEST bdev_json_nonarray 00:17:59.959 ************************************ 00:17:59.959 12:38:05 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:00.220 [2024-11-19 12:38:05.259043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:00.220 [2024-11-19 12:38:05.259169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101147 ] 00:18:00.220 [2024-11-19 12:38:05.415817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.220 [2024-11-19 12:38:05.463528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.220 [2024-11-19 12:38:05.463654] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:00.220 [2024-11-19 12:38:05.463684] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:00.220 [2024-11-19 12:38:05.463703] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.480 00:18:00.480 real 0m0.405s 00:18:00.480 user 0m0.172s 00:18:00.480 sys 0m0.129s 00:18:00.480 12:38:05 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.480 12:38:05 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:00.480 ************************************ 00:18:00.480 END TEST bdev_json_nonarray 00:18:00.480 ************************************ 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:18:00.480 12:38:05 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:18:00.480 00:18:00.480 real 0m35.352s 00:18:00.480 user 0m48.172s 00:18:00.480 sys 0m4.754s 00:18:00.480 12:38:05 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.480 12:38:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:00.480 
************************************ 00:18:00.480 END TEST blockdev_raid5f 00:18:00.480 ************************************ 00:18:00.480 12:38:05 -- spdk/autotest.sh@194 -- # uname -s 00:18:00.481 12:38:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:00.481 12:38:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:00.481 12:38:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:00.481 12:38:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:18:00.481 12:38:05 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:18:00.481 12:38:05 -- spdk/autotest.sh@256 -- # timing_exit lib 00:18:00.481 12:38:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.481 12:38:05 -- common/autotest_common.sh@10 -- # set +x 00:18:00.741 12:38:05 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:18:00.741 12:38:05 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:18:00.741 12:38:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:18:00.741 12:38:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:18:00.741 12:38:05 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:18:00.741 12:38:05 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:18:00.741 12:38:05 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:18:00.741 12:38:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.741 12:38:05 -- common/autotest_common.sh@10 -- # set +x 00:18:00.741 12:38:05 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:18:00.741 12:38:05 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:18:00.741 12:38:05 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:18:00.741 12:38:05 -- common/autotest_common.sh@10 -- # set +x 00:18:02.649 INFO: APP EXITING 00:18:02.649 INFO: killing all VMs 00:18:02.649 INFO: killing vhost app 00:18:02.649 INFO: EXIT DONE 00:18:03.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:03.219 Waiting for block devices as requested 00:18:03.219 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:03.479 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:04.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:04.419 Cleaning 00:18:04.419 Removing: /var/run/dpdk/spdk0/config 00:18:04.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:04.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:04.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:04.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:04.419 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:04.419 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:04.419 Removing: /dev/shm/spdk_tgt_trace.pid69262 00:18:04.419 Removing: /var/run/dpdk/spdk0 00:18:04.419 Removing: /var/run/dpdk/spdk_pid100207 00:18:04.419 Removing: /var/run/dpdk/spdk_pid100464 00:18:04.419 Removing: /var/run/dpdk/spdk_pid100509 00:18:04.419 Removing: /var/run/dpdk/spdk_pid100535 00:18:04.419 Removing: /var/run/dpdk/spdk_pid100759 00:18:04.419 Removing: /var/run/dpdk/spdk_pid100921 00:18:04.419 Removing: 
/var/run/dpdk/spdk_pid101003 00:18:04.419 Removing: /var/run/dpdk/spdk_pid101085 00:18:04.419 Removing: /var/run/dpdk/spdk_pid101121 00:18:04.419 Removing: /var/run/dpdk/spdk_pid101147 00:18:04.419 Removing: /var/run/dpdk/spdk_pid69097 00:18:04.419 Removing: /var/run/dpdk/spdk_pid69262 00:18:04.419 Removing: /var/run/dpdk/spdk_pid69469 00:18:04.419 Removing: /var/run/dpdk/spdk_pid69551 00:18:04.419 Removing: /var/run/dpdk/spdk_pid69579 00:18:04.419 Removing: /var/run/dpdk/spdk_pid69691 00:18:04.419 Removing: /var/run/dpdk/spdk_pid69709 00:18:04.419 Removing: /var/run/dpdk/spdk_pid69897 00:18:04.419 Removing: /var/run/dpdk/spdk_pid69970 00:18:04.419 Removing: /var/run/dpdk/spdk_pid70050 00:18:04.419 Removing: /var/run/dpdk/spdk_pid70150 00:18:04.419 Removing: /var/run/dpdk/spdk_pid70236 00:18:04.419 Removing: /var/run/dpdk/spdk_pid70281 00:18:04.419 Removing: /var/run/dpdk/spdk_pid70312 00:18:04.419 Removing: /var/run/dpdk/spdk_pid70388 00:18:04.419 Removing: /var/run/dpdk/spdk_pid70494 00:18:04.419 Removing: /var/run/dpdk/spdk_pid70921 00:18:04.419 Removing: /var/run/dpdk/spdk_pid70969 00:18:04.419 Removing: /var/run/dpdk/spdk_pid71021 00:18:04.419 Removing: /var/run/dpdk/spdk_pid71038 00:18:04.419 Removing: /var/run/dpdk/spdk_pid71096 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71112 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71181 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71197 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71252 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71259 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71314 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71331 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71469 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71501 00:18:04.420 Removing: /var/run/dpdk/spdk_pid71585 00:18:04.420 Removing: /var/run/dpdk/spdk_pid72755 00:18:04.680 Removing: /var/run/dpdk/spdk_pid72950 00:18:04.680 Removing: /var/run/dpdk/spdk_pid73090 00:18:04.680 Removing: /var/run/dpdk/spdk_pid73695 00:18:04.680 Removing: 
/var/run/dpdk/spdk_pid73895 00:18:04.680 Removing: /var/run/dpdk/spdk_pid74024 00:18:04.680 Removing: /var/run/dpdk/spdk_pid74629 00:18:04.680 Removing: /var/run/dpdk/spdk_pid74948 00:18:04.680 Removing: /var/run/dpdk/spdk_pid75077 00:18:04.680 Removing: /var/run/dpdk/spdk_pid76418 00:18:04.680 Removing: /var/run/dpdk/spdk_pid76660 00:18:04.680 Removing: /var/run/dpdk/spdk_pid76789 00:18:04.680 Removing: /var/run/dpdk/spdk_pid78130 00:18:04.680 Removing: /var/run/dpdk/spdk_pid78372 00:18:04.680 Removing: /var/run/dpdk/spdk_pid78501 00:18:04.680 Removing: /var/run/dpdk/spdk_pid79842 00:18:04.680 Removing: /var/run/dpdk/spdk_pid80274 00:18:04.680 Removing: /var/run/dpdk/spdk_pid80406 00:18:04.680 Removing: /var/run/dpdk/spdk_pid81836 00:18:04.680 Removing: /var/run/dpdk/spdk_pid82084 00:18:04.680 Removing: /var/run/dpdk/spdk_pid82224 00:18:04.680 Removing: /var/run/dpdk/spdk_pid83654 00:18:04.680 Removing: /var/run/dpdk/spdk_pid83902 00:18:04.680 Removing: /var/run/dpdk/spdk_pid84038 00:18:04.680 Removing: /var/run/dpdk/spdk_pid85469 00:18:04.680 Removing: /var/run/dpdk/spdk_pid85934 00:18:04.680 Removing: /var/run/dpdk/spdk_pid86073 00:18:04.680 Removing: /var/run/dpdk/spdk_pid86201 00:18:04.680 Removing: /var/run/dpdk/spdk_pid86607 00:18:04.680 Removing: /var/run/dpdk/spdk_pid87324 00:18:04.680 Removing: /var/run/dpdk/spdk_pid87711 00:18:04.680 Removing: /var/run/dpdk/spdk_pid88383 00:18:04.680 Removing: /var/run/dpdk/spdk_pid88820 00:18:04.680 Removing: /var/run/dpdk/spdk_pid89560 00:18:04.680 Removing: /var/run/dpdk/spdk_pid89957 00:18:04.680 Removing: /var/run/dpdk/spdk_pid91872 00:18:04.680 Removing: /var/run/dpdk/spdk_pid92309 00:18:04.680 Removing: /var/run/dpdk/spdk_pid92730 00:18:04.680 Removing: /var/run/dpdk/spdk_pid94773 00:18:04.680 Removing: /var/run/dpdk/spdk_pid95247 00:18:04.680 Removing: /var/run/dpdk/spdk_pid95728 00:18:04.680 Removing: /var/run/dpdk/spdk_pid96759 00:18:04.680 Removing: /var/run/dpdk/spdk_pid97076 00:18:04.680 Removing: 
/var/run/dpdk/spdk_pid97992 00:18:04.680 Removing: /var/run/dpdk/spdk_pid98305 00:18:04.680 Removing: /var/run/dpdk/spdk_pid99226 00:18:04.680 Removing: /var/run/dpdk/spdk_pid99543 00:18:04.680 Clean 00:18:04.940 12:38:09 -- common/autotest_common.sh@1451 -- # return 0 00:18:04.940 12:38:09 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:18:04.940 12:38:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.940 12:38:09 -- common/autotest_common.sh@10 -- # set +x 00:18:04.940 12:38:10 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:18:04.940 12:38:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.940 12:38:10 -- common/autotest_common.sh@10 -- # set +x 00:18:04.940 12:38:10 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:04.940 12:38:10 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:04.940 12:38:10 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:04.940 12:38:10 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:18:04.940 12:38:10 -- spdk/autotest.sh@394 -- # hostname 00:18:04.940 12:38:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:05.200 geninfo: WARNING: invalid characters removed from testname! 
00:18:31.757 12:38:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:32.327 12:38:37 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:34.867 12:38:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:36.884 12:38:41 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:38.793 12:38:43 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:40.702 12:38:45 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:43.246 12:38:48 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:43.246 12:38:48 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:18:43.246 12:38:48 -- common/autotest_common.sh@1681 -- $ lcov --version
00:18:43.246 12:38:48 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:18:43.246 12:38:48 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:18:43.246 12:38:48 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:18:43.246 12:38:48 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:18:43.246 12:38:48 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:18:43.246 12:38:48 -- scripts/common.sh@336 -- $ IFS=.-:
00:18:43.246 12:38:48 -- scripts/common.sh@336 -- $ read -ra ver1
00:18:43.246 12:38:48 -- scripts/common.sh@337 -- $ IFS=.-:
00:18:43.246 12:38:48 -- scripts/common.sh@337 -- $ read -ra ver2
00:18:43.246 12:38:48 -- scripts/common.sh@338 -- $ local 'op=<'
00:18:43.246 12:38:48 -- scripts/common.sh@340 -- $ ver1_l=2
00:18:43.246 12:38:48 -- scripts/common.sh@341 -- $ ver2_l=1
00:18:43.246 12:38:48 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:18:43.246 12:38:48 -- scripts/common.sh@344 -- $ case "$op" in
00:18:43.246 12:38:48 -- scripts/common.sh@345 -- $ : 1
00:18:43.246 12:38:48 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:18:43.246 12:38:48 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:43.246 12:38:48 -- scripts/common.sh@365 -- $ decimal 1
00:18:43.246 12:38:48 -- scripts/common.sh@353 -- $ local d=1
00:18:43.246 12:38:48 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:18:43.246 12:38:48 -- scripts/common.sh@355 -- $ echo 1
00:18:43.246 12:38:48 -- scripts/common.sh@365 -- $ ver1[v]=1
00:18:43.246 12:38:48 -- scripts/common.sh@366 -- $ decimal 2
00:18:43.246 12:38:48 -- scripts/common.sh@353 -- $ local d=2
00:18:43.246 12:38:48 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:18:43.246 12:38:48 -- scripts/common.sh@355 -- $ echo 2
00:18:43.246 12:38:48 -- scripts/common.sh@366 -- $ ver2[v]=2
00:18:43.246 12:38:48 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:18:43.246 12:38:48 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:18:43.246 12:38:48 -- scripts/common.sh@368 -- $ return 0
00:18:43.246 12:38:48 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:43.246 12:38:48 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:18:43.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:43.246 --rc genhtml_branch_coverage=1
00:18:43.246 --rc genhtml_function_coverage=1
00:18:43.246 --rc genhtml_legend=1
00:18:43.246 --rc geninfo_all_blocks=1
00:18:43.246 --rc geninfo_unexecuted_blocks=1
00:18:43.246 
00:18:43.246 '
00:18:43.246 12:38:48 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:18:43.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:43.246 --rc genhtml_branch_coverage=1
00:18:43.246 --rc genhtml_function_coverage=1
00:18:43.246 --rc genhtml_legend=1
00:18:43.246 --rc geninfo_all_blocks=1
00:18:43.246 --rc geninfo_unexecuted_blocks=1
00:18:43.246 
00:18:43.246 '
00:18:43.246 12:38:48 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:18:43.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:43.246 --rc genhtml_branch_coverage=1
00:18:43.246 --rc genhtml_function_coverage=1
00:18:43.246 --rc genhtml_legend=1
00:18:43.246 --rc geninfo_all_blocks=1
00:18:43.246 --rc geninfo_unexecuted_blocks=1
00:18:43.246 
00:18:43.246 '
00:18:43.246 12:38:48 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:18:43.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:43.246 --rc genhtml_branch_coverage=1
00:18:43.246 --rc genhtml_function_coverage=1
00:18:43.246 --rc genhtml_legend=1
00:18:43.246 --rc geninfo_all_blocks=1
00:18:43.246 --rc geninfo_unexecuted_blocks=1
00:18:43.246 
00:18:43.246 '
00:18:43.246 12:38:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:43.246 12:38:48 -- scripts/common.sh@15 -- $ shopt -s extglob
00:18:43.246 12:38:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:18:43.246 12:38:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:43.246 12:38:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:43.246 12:38:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:43.246 12:38:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:43.246 12:38:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:43.246 12:38:48 -- paths/export.sh@5 -- $ export PATH
00:18:43.246 12:38:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:43.246 12:38:48 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:18:43.246 12:38:48 -- common/autobuild_common.sh@479 -- $ date +%s
00:18:43.246 12:38:48 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732019928.XXXXXX
00:18:43.246 12:38:48 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732019928.2jaO8B
00:18:43.246 12:38:48 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:18:43.246 12:38:48 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:18:43.246 12:38:48 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:18:43.246 12:38:48 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:18:43.246 12:38:48 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:18:43.246 12:38:48 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:18:43.246 12:38:48 -- common/autobuild_common.sh@495 -- $ get_config_params
00:18:43.246 12:38:48 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:18:43.246 12:38:48 -- common/autotest_common.sh@10 -- $ set +x
00:18:43.246 12:38:48 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:18:43.246 12:38:48 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:18:43.246 12:38:48 -- pm/common@17 -- $ local monitor
00:18:43.246 12:38:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:43.246 12:38:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:43.246 12:38:48 -- pm/common@25 -- $ sleep 1
00:18:43.246 12:38:48 -- pm/common@21 -- $ date +%s
00:18:43.246 12:38:48 -- pm/common@21 -- $ date +%s
00:18:43.246 12:38:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732019928
00:18:43.246 12:38:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732019928
00:18:43.246 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732019928_collect-vmstat.pm.log
00:18:43.246 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732019928_collect-cpu-load.pm.log
00:18:44.188 12:38:49 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:18:44.188 12:38:49 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:18:44.188 12:38:49 -- spdk/autopackage.sh@14 -- $ timing_finish
00:18:44.188 12:38:49 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:44.188 12:38:49 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:44.188 12:38:49 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:44.188 12:38:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:18:44.188 12:38:49 -- pm/common@29 -- $ signal_monitor_resources TERM
00:18:44.188 12:38:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:18:44.188 12:38:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:44.188 12:38:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:18:44.188 12:38:49 -- pm/common@44 -- $ pid=102665
00:18:44.188 12:38:49 -- pm/common@50 -- $ kill -TERM 102665
00:18:44.188 12:38:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:44.188 12:38:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:18:44.188 12:38:49 -- pm/common@44 -- $ pid=102667
00:18:44.188 12:38:49 -- pm/common@50 -- $ kill -TERM 102667
00:18:44.188 + [[ -n 6163 ]]
00:18:44.188 + sudo kill 6163
00:18:44.198 [Pipeline] }
00:18:44.214 [Pipeline] // timeout
00:18:44.219 [Pipeline] }
00:18:44.233 [Pipeline] // stage
00:18:44.238 [Pipeline] }
00:18:44.256 [Pipeline] // catchError
00:18:44.265 [Pipeline] stage
00:18:44.268 [Pipeline] { (Stop VM)
00:18:44.281 [Pipeline] sh
00:18:44.564 + vagrant halt
00:18:47.105 ==> default: Halting domain...
00:18:55.253 [Pipeline] sh
00:18:55.537 + vagrant destroy -f
00:18:58.077 ==> default: Removing domain...
00:18:58.090 [Pipeline] sh
00:18:58.375 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:58.385 [Pipeline] }
00:18:58.400 [Pipeline] // stage
00:18:58.405 [Pipeline] }
00:18:58.420 [Pipeline] // dir
00:18:58.425 [Pipeline] }
00:18:58.440 [Pipeline] // wrap
00:18:58.447 [Pipeline] }
00:18:58.460 [Pipeline] // catchError
00:18:58.469 [Pipeline] stage
00:18:58.472 [Pipeline] { (Epilogue)
00:18:58.485 [Pipeline] sh
00:18:58.770 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:19:02.980 [Pipeline] catchError
00:19:02.982 [Pipeline] {
00:19:02.993 [Pipeline] sh
00:19:03.279 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:19:03.279 Artifacts sizes are good
00:19:03.289 [Pipeline] }
00:19:03.302 [Pipeline] // catchError
00:19:03.312 [Pipeline] archiveArtifacts
00:19:03.318 Archiving artifacts
00:19:03.472 [Pipeline] cleanWs
00:19:03.484 [WS-CLEANUP] Deleting project workspace...
00:19:03.484 [WS-CLEANUP] Deferred wipeout is used...
00:19:03.491 [WS-CLEANUP] done
00:19:03.493 [Pipeline] }
00:19:03.506 [Pipeline] // stage
00:19:03.510 [Pipeline] }
00:19:03.523 [Pipeline] // node
00:19:03.528 [Pipeline] End of Pipeline
00:19:03.566 Finished: SUCCESS